"However," she goes on to postulate, "in recent literature, the precise account he offered of implicature recovery has been questioned and alternative accounts have emerged. In this paper, I examine three such alternative accounts. My main aim is to show that the two most popular accounts in the current literature (the default inference view and the relevance theoretic approach) still face significant problems. I will then conclude by suggesting that an alternative account, emerging from semantic minimalism, is best placed to accommodate Grice’s distinction."
Grice’s distinction between what is said by a sentence and what is implicated by an utterance of that sentence is, of course, extremely familiar. It is also almost universally accepted. However, in recent literature, the precise account he offered of implicature recovery has been questioned and alternative accounts, emerging from different semantic programmes, have been proposed. In this paper, I would like to examine three such alternative accounts. My main aim is to show that the two most popular accounts in the current literature (the default inference view and the relevance theoretic approach) still face significant problems. If this is right then there is reason to look for a third alternative, and in conclusion I’ll suggest that it is the approach emerging from so-called semantic minimalism which is best placed to accommodate Grice’s fundamental distinction between what a sentence means and what utterances of it implicate.
To keep things relatively constrained, I will focus on three desiderata which it seems plausible a successful theory of implicature should meet. I’ll assume (pretty much without argument) that any successful account should either accommodate or explain away (A)-(C):
A) the intuitive difference Grice identified between generalized and particularized conversational implicatures
B) the putative semantic (truth-conditional) relevance of some implicatures
C) recent experimental data concerning implicature recovery
Condition (A) marks the fact that Grice’s distinction does seem intuitively appealing. Condition (B), as we will see, arises out of an apparent problem with the original Gricean framework, hence it would seem natural to expect any theory seeking to supplant Grice to avoid, or in some way defuse, this worry. Finally, the third condition (which I take to be the strongest constraint here) is pretty self-evident: any theory of implicature which is concerned with modelling the actual processes underpinning implicature recovery in ordinary speakers must be open to confirmation or rejection in the light of empirical evidence. Given these desiderata, then, the structure of the paper is as follows: I’ll begin in §1 by recapping the Gricean approach and outlining one apparent problem that it faces (which yields desideratum (B) above). §§2-3 will then be concerned with the negative aim of showing that the two most popular accounts of implicature to be found in the recent literature fail to satisfy the desiderata given above. Finally, in §4, I’ll sketch the approach of semantic minimalism and suggest that it provides a more plausible theory of implicature.
According to Grice, two very different elements combine to make up the total signification of an utterance. As is well-known, his fundamental divide came between what is said and what is implicated. ‘What is said (by a sentence)’ is a technical term for Grice and is usually held to be determined by the syntactic constituents of that sentence together with the processes of disambiguation and reference determination for context-sensitive expressions. Although Grice steered clear of the terminology of semantics and pragmatics, it is common to see this notion of what is said by a sentence as lining up with literal, semantic content for sentences, while what is implicated by an utterance is taken to line up with pragmatic meaning. Within the category of implicature Grice also recognises distinct kinds, first distinguishing conventional (or non-conversational) and conversational implicatures, and then distinguishing generalised from particularised conversational implicatures. For purposes of this paper it is the latter distinction which will be of special interest to us. Conversational implicatures in general are those propositions which a hearer is required to assume in order to preserve her view of the speaker as a cooperative partner in communication. Thus, to put things rather more formally, according to Grice, a speaker S conversationally implicates that q by saying that p only if:
(i) S is presumed to be following the conversational maxims, or at least the Cooperative Principle;
(ii) the supposition that S is aware that (or thinks that) q is required to make S’s saying or making as if to say p consistent with this presumption;
(iii) the speaker thinks (and would expect the hearer to think that the speaker thinks) that it is within the competence of the hearer to work out, or grasp intuitively, that the supposition mentioned in (ii) is required. (Grice 1989: 30-31)
We can see the way in which the Gricean account of conversational implicature works, and appreciate the difference between generalised and particularised implicatures, by looking at an example.
Imagine that Jack and Jill are planning a conference and Jack wants to know when the conference should start. Jill responds by saying:
1) Some delegates will attend at 8.30am.
Here it seems that Jill’s utterance licenses both a generalised and a particularised conversational implicature. Her use of the term ‘some’ warrants a GCI because it is a weaker term on a scale (here, the scale ‘<some, all>’): given the maxim of quantity, a speaker’s choice of the weaker term implicates that she was not in a position to assert the stronger one, so Jill’s utterance carries the generalised implicature that not all delegates will attend at 8.30am.
Things are very different with the class of particularised conversational implicatures which are intimately bound up with specific contexts of utterance. For instance, in the example above, if it is common knowledge amongst the parties that not every delegate will attend every talk, then Jill implicates that we could start the conference at 8.30am. This further implicature is generated through observation of the maxim of relevance: Jill’s utterance should be relevant to Jack’s inquiry about the start time for the conference. However, that this further proposition is conveyed is a feature of this particular context of utterance – were Jill to utter the same words in a different context the implicature might change or disappear. Intuitively, at least, this Gricean distinction between GCIs and PCIs seems an appealing one, capturing a real difference in the way that implicated propositions are conveyed. However, despite the intuitive appeal of the Gricean framework for implicature, it does face problems.
One well-known objection to the Gricean framework (whereby what is said by a sentence is taken to be determined independently of what is implicated by an utterance of that sentence) concerns the claim that some implicatures can in fact contribute to what is said by a sentence (i.e. to the literal truth-conditions of the utterance). So for instance, consider (2):
2) Jill blew the whistle on poor practices at work and was sacked
According to Grice, the meaning of ‘and’ is given by logical conjunction; thus what is said by (2) is simply that two events took place – it does not assert any temporal or causal connection between them. Any such additional features arise only as a GCI connected to the use of the term ‘and’. Thus the GCI attaching to (2) is:
2*) Jill blew the whistle on poor practices at work and was sacked because of that
However, when we embed (2) in a larger context, it can seem compelling to think that the causal element is truth-conditionally relevant. Thus the truth-conditions of (3) and (4) seem to differ:
3) If Jill blew the whistle on poor practices at work and was sacked, then she is entitled to compensation.
4) If Jill was sacked and blew the whistle on poor practices at work, then she is entitled to compensation.
Prima facie, at least, it is hard to see how to reconcile cases like this with the Gricean framework where literal meaning precedes implicature derivation and thus where what is said by a sentence is independent of any implicature. Given this worry with the Gricean account, we might instead think to look for an alternative theory of implicature, one which can accommodate the apparent truth-conditional relevance of these generalised conversational implicatures. In the next two sections I will explore two such proposals.
The Default Interpretation View: Levinson 2000
In his book Presumptive Meanings, Levinson is concerned to draw apart three distinct levels of meaning. There is the level of sentence-type meaning (akin to the Gricean notion of what is said by a sentence) and there is utterance-token meaning (the full interpretation of a linguistic act), but there is also utterance-type meaning. This last category is the level of generalised conversational implicature. Levinson writes that GCIs ‘are carried by the structure of utterances, given the structure of language, and not by virtue of the particular contexts of utterances’ (2000: 1). That is to say, GCIs form autonomous, default inferences, resting on quite general features of the usage of certain sub-sentential locutions. So, for instance, it is simply a feature of the default usage rule associated with the term ‘some’ that it licenses an automatic, heuristic driven inference to ‘not all’ (via what Levinson terms the ‘Q-heuristic’: what isn’t said isn’t the case). This is very different to the class of PCIs which are nonce inferences: potentially one-off, context-sensitive inferences, driven by close consideration of a particular speaker’s aims and intentions. Clearly, then, it is an in-built feature of Levinson’s account that it meets desideratum (A): it preserves Grice’s intuitive distinction between GCIs and PCIs.
Furthermore, Levinson is at pains to accommodate desideratum (B): to give an account of the truth-conditional relevance of certain implicatures. His way of meeting this desideratum is to specify so-called ‘intrusive’ constructions, which allow embedded expressions to contribute not only their literal meaning to the truth-conditions of the whole but also any implicatures that the expressions carry (2000: 213-4). So for instance, conditional constructions may take account not only of the literal truth-conditions of embedded sentences but also the default implicatures they generate. In this way the intuitive difference between claims like (3) and (4) is explained because the temporal/causal element associated with ‘and’ is incorporated into what is said by the larger conditional sentence.
However, although Levinson does seek to address our second desideratum, it is not entirely clear that his account is successful in this respect. One initial worry with the account he provides is the extent to which it gives us a genuine explanation of the phenomenon in question versus a re-description of it. The worry is that, unless we have an independent characterisation of what ‘intrusive constructions’ are, and how and why they function in the way supposed, it may appear that we are simply re-describing the problem phenomenon rather than taking a genuine step down the explanatory road. Be this as it may, however, the account also faces a much more concrete challenge. In her 2004 review of Levinson’s book, Robyn Carston notes that, given Levinson’s notion of intrusive constructions, certain intuitively valid arguments will be rendered invalid. The problem is that the truth-conditions of the antecedent of a conditional like ‘p → q’ may differ from the truth-conditions of any unembedded instance of it, since in the more complex embedded setting the truth-conditions of the antecedent pick up on the GCIs of the unembedded ‘p’. Levinson’s account apparently predicts, then, that despite appearances to the contrary, many arguments which we take to be uncontroversial instances of modus ponens actually have the invalid argument form ‘p, r → q, therefore q’. As Carston notes, a theory which entails this outcome seems unacceptable.
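Carston’s point can be set out schematically (the notation here is mine: write p⁺ for p enriched with its GCIs, so that for (2) above p⁺ is (2*)):

```latex
% Apparent form of the argument: modus ponens
p, \quad p \rightarrow q \;\vdash\; q
% Form predicted by the intrusive-construction account,
% where r is the GCI carried by the unembedded p:
p, \quad (p \wedge r) \rightarrow q \;\not\vdash\; q
```

Since the bare premise p does not entail the enriched antecedent p ∧ r, the argument as analysed fails to be an instance of modus ponens, despite speakers’ uniform judgement of validity.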
Finally, it is not clear that Levinson’s account can meet condition (C), concerning the experimental data relating to implicature recovery (this point is noted by Carston, but discussed at more length in papers by Ira Noveck). The problem is that experiments seem to show that the logical readings of scalar implicatures are accessed more quickly than pragmatic readings, yet such findings undermine the claim that it is the pragmatic reading which provides the default interpretation in these cases. So, for instance, in experiments conducted by Bott and Noveck 2004, where participants were given explicit instructions for interpreting scalar terms (e.g. ‘treat “some” as meaning “at least one and possibly all”’ versus ‘treat “some” as meaning “at least one but not all”’), those who were required to give logical interpretations to scalar terms responded with truth-value judgements for sentences such as ‘Some elephants are mammals’ much more quickly than those required to give pragmatic readings. Furthermore, in experiments where no instruction was given and participants had free choice over logical or pragmatic readings, the time delay effects between the logical and the pragmatic readings remained. So, shown the sentence ‘Some elephants are mammals’, a response of ‘true’, indicating a logical reading for ‘some’, was quicker than a response of ‘false’, indicating a pragmatic ‘some but not all’ reading. Finally, if participants were placed under time constraints in providing truth-evaluations for sentences containing scalar terms they were far more likely to respond with the logical reading, while when a delay prior to response was imposed they were more likely to respond with the pragmatic reading. What all these experiments point to, then, is the status of the pragmatic reading not as a default interpretation but as a delayed interpretation. As Noveck 2004: 314 notes: ‘there are no indications that turning some into some but not all is an effortless step’.
Prima facie, then, Levinson’s default inference account faces problems with meeting both our second and third desiderata. Given this, many theorists have rejected the default inference view, preferring a more context-sensitive account of implicature recovery of the kind we will look at in §3. However, before we move on to this kind of alternative account we should note that Levinson’s theory is not the only version of the default inference view currently on the table. For Gennaro Chierchia has proposed a radically different version of the default view which we need to consider.
The Default Interpretation View: Chierchia 2004
Although Chierchia is in agreement with Levinson concerning the default status of the enriched reading of scalar terms, the mechanism underpinning their default status is very different for Chierchia. He argues that ‘pragmatic computations and grammar-driven ones are “interspersed.” Implicatures are not computed after truth conditions of (root) sentences have been figured out; they are computed phrase by phrase in tandem with truth conditions (or whatever compositional semantics computes)…[C]ontrary to the dominant view, SIs [scalar implicatures] are introduced locally and projected upward in a way that mirrors standard semantic recursion’ (Chierchia 2004: 40). Chierchia retains the defeasibility of such implicatures by noting that they are cancelled only in certain semantic environments (roughly, those which are downward entailing). Thus, his idea is that the enriched reading is always calculated at the local level though it may be subsequently removed at a more global level if the scalar term is found to be embedded in the wrong kind of semantic environment. Now the details of the account are fairly complex and we might question whether such a radical revision of the way in which pragmatics affects semantics is really motivated by the phenomenon of scalar implicature alone (see Horn 2006 for an argument along these lines). However, it is also clear that, on Chierchia’s version of the default inference view, many of the challenges from the previous section fall away. For instance, the intuitive worry that the notion of ‘intrusive constructions’ provides a re-labelling rather than an explanation of the semantic relevance phenomenon is avoided here. Clearly, if scalar implicatures build into the meaning of complex constructions in a way that parallels standard recursion for grammatical content, then we have a good explanation of why they are relevant to the truth-conditions of phrases in which scalar terms are embedded. 
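The contrast with Levinson can be made concrete with a toy model. The following sketch is my own illustration (it is not Chierchia’s formalism, and the example sets are invented): the strengthened reading of ‘some’ is computed locally, in tandem with the literal content, and suppressed when the term sits in a downward-entailing environment such as negation.

```python
# A toy sketch of local scalar strengthening with cancellation in
# downward-entailing (DE) environments. Predicates are modelled as sets.

def some(F, G):
    """Plain (logical) reading: at least one F is G."""
    return len(F & G) >= 1

def some_strengthened(F, G):
    """Locally enriched reading: some but not all F are G."""
    return some(F, G) and not F <= G

def interpret_some(F, G, downward_entailing=False):
    # On the view sketched here, the enriched reading is computed locally
    # by default; in a DE environment the strengthening never applies.
    return some(F, G) if downward_entailing else some_strengthened(F, G)

elephants = {"e1", "e2", "e3"}
mammals   = {"e1", "e2", "e3", "m1"}   # every elephant is a mammal

# Unembedded: 'Some elephants are mammals' gets the enriched reading,
# which is false here, since all elephants are mammals.
print(interpret_some(elephants, mammals))                               # False

# Under negation (a DE environment): 'No elephants are mammals' is built
# from the plain reading, so the implicature never projects upward.
print(not interpret_some(elephants, mammals, downward_entailing=True))  # False
```

Because the strengthening enters composition at the same point as literal content, an embedded occurrence of ‘some’ and an unembedded one receive the same interpretation, which is the feature exploited in the next paragraph.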
Furthermore, Carston’s challenge that certain intuitively valid arguments get rendered as invalid also fails to hold against Chierchia’s position because, since implicatures build into semantic content at the local level, there will be no difference in content between a premise containing a scalar term and an instance of that premise embedded in a conditional phrase.
Clearly, then, Chierchia’s account fares better in these respects than Levinson’s. However, although he avoids Carston’s challenge, there are other argument forms where the account seems less compelling. For instance, since the pragmatically enhanced reading is generated at the local level, alongside grammatical content, and feeds into the compositional semantics in exactly the same way, it would seem that ordinary speakers should always work with the enriched reading (unless the context is a defeating one). Yet this would seem to suggest that ordinary speakers should find instances of the following argument valid:
5) Some elephants are mammals
6) Therefore, there is at least one elephant which is not a mammal
Yet it seems they do not. Now, there may be further moves for this version of the default view to make but, at least prima facie, the apparent invalidity of arguments like the above seems to suggest that we should keep literal semantic content and pragmatically determined implicatures further apart than does Chierchia’s account. Most seriously, however, it seems that Chierchia’s default view, just like Levinson’s, is at a loss to explain the time-delay data canvassed above. If the enriched reading is default then it should be quicker to recover than logical readings, yet this is not what the data shows.
So-called "Relevance Theory".
According to so-called "Relevance Theory" (after Grice), "there are three possible stages or levels of utterance interpretation."
the recovery of (usually) incomplete logical forms, the expansion or completion of these logical forms to yield explicatures (what is said by the utterance), and finally the further derivation of any implicatures. Both the construction of explicatures and the derivation of implicatures are driven by relevance theoretic principles, i.e. the drive to recover the optimal interpretation in terms of a balance between cognitive effects and processing effort (Sperber and Wilson 1986: 47-50). Once the hearer finds an interpretation which crosses the relevance threshold, she is licensed in taking it to be what the speaker intended to communicate. So, for RT, it is a mistake to think of either the logical or the pragmatic reading as the default interpretation of a scalar item. Although the lexical entry for the expression ‘some’ is given by the weaker logical interpretation, whether an utterance containing the expression is interpreted as conveying the logical or the pragmatic interpretation will be a matter of the relevance constraints in that particular context. Whichever interpretation crosses the relevance threshold in that context of utterance will constitute the correct interpretation.
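Since RT itself offers no numerical algorithm, the following is only a toy sketch of the effects/effort trade-off just described, with invented numbers: candidate interpretations are considered in order of increasing processing effort, and the first to cross the contextual relevance threshold is accepted.

```python
# A toy sketch of relevance-driven interpretation choice. The candidate
# labels and the effects/effort scores are hypothetical illustrations.

def interpret(candidates, threshold):
    """candidates: list of (label, cognitive_effects, processing_effort).

    Candidates are tried in order of increasing effort; the first whose
    effects reach the contextual relevance threshold is accepted."""
    for label, effects, effort in sorted(candidates, key=lambda c: c[2]):
        if effects >= threshold:
            return label
    return None  # no candidate is relevant enough in this context

# Hypothetical readings of 'Some delegates will attend at 8.30am'.
candidates = [
    ("logical: at least one, possibly all", 3, 1),
    ("pragmatic: some but not all",         5, 2),
]

# In an undemanding context the cheap logical reading crosses the
# threshold first; in a more demanding context processing continues
# until the costlier, enriched reading is reached.
print(interpret(candidates, threshold=2))   # logical reading accepted
print(interpret(candidates, threshold=4))   # pragmatic reading accepted
```

The point of the sketch is simply that, on this picture, neither reading is a default: which interpretation is recovered is fixed by the contextual threshold, not by the lexical item itself.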
Now one feature of the RT view is that it rests no theoretical weight on the original Gricean distinction between GCIs and PCIs. All implicatures now stand together simply as further possible inferences to be drawn (via relevance theoretic mechanisms) on the basis of the explicature; thus the account does not accommodate desideratum (A). However, the first of our conditions of success for an account of implicature is the weakest of the three; if an account deals well with requirements (B) and (C) without positing a difference in types of implicature we should be prepared to relinquish Grice’s distinction, despite its intuitive appeal. So, how does RT fare with respect to (B) and (C)?
At least initially, RT seems to deal very well with desideratum (B) – the semantic relevance constraint – since it is a raison d’être of the account in general that it allows for pragmatically contributed elements to contribute to the first truth-evaluable proposition to which the utterance gives rise (the explicature). Hence it should be no surprise at all to find that a pragmatically contributed strengthening of a scalar term contributes to the literal truth-conditions of an utterance. So, for instance, in (3) above, since the pragmatic reading of the antecedent is required to support the consequent, RT predicts that this will indeed be the recovered interpretation. What is said by this utterance is the pragmatically enhanced reading: if Jill blew the whistle on poor practices at work and was sacked because of that then she is entitled to compensation.
However, although the solution is successful in this case, there do seem to be cases which cause problems for the relevance theoretic account. For, as Larry Horn 2006 notes, the recovery of the stronger, pragmatic interpretation can occur as a ‘retroactive effect’, rather than something which occurs at the ‘first pass’ interpretation of the antecedent. Horn discusses the example:
7) If some of my friends come to the party, I’ll be happy—but if all of them do, I’ll be in trouble
and comments 2006: 27 that ‘it’s only when the stronger scalar is reached that the earlier, weaker one is retroactively adjusted to accommodate an upper bound into its semantics, e.g. with some being REinterpreted as expressing (rather than merely communicating) “some but not all”’. Yet this seems to suggest that a hearer will be unable to arrive at the explicature of the first sentence until she has waited to see if a stronger triggering item is delivered later.
Now we should note that RT theorists do explicitly allow for a kind of retroactive effect in their account. For they do not posit the serial derivation of first what is said and then its implicatures, as we had on the Gricean picture, instead the two are derived in parallel and each may contribute to the other. Thus Carston 2004b: 8-9 notes: ‘The assumption…that “what is said” is determined prior to the derivation of conversational implicatures is [also] relaxed and the two levels of communicated content are taken to be derived in parallel via a mechanism of “mutual adjustment”, so that, for instance, an interpretative hypothesis about an implicature might lead, through a step of backwards inference, to a particular adjustment of explicit content’; see also Carston 2002, §2.3.4. However, the cases Horn discusses cannot be treated as instances of mutual adjustment, for it is clearly possible that the trigger for Horn’s kind of retroactive reassessment could come some considerable time after the initial production of the scalar term. Yet it seems problematic to think that the explicit content of the original utterance cannot be determined until we have assured ourselves that no strengthening triggers will emerge anywhere further down the line. (And yet we know that the strengthened reading is not the one to be given by default, since RT opposes the default interpretation view.) In these kinds of cases, then, it seems that RT faces something of a dilemma: to accommodate truth-conditional relevance it seems that we need the pragmatically strengthened reading of the first antecedent in (7) to count as the explicature, but given RT’s independent acceptance of the idea that the logical reading gives the explicature in at least some cases, it seems quite possible that the context for the antecedent of (7) only supports the logical reading as an explicature.
One option RT might pursue in the face of these kinds of cases is to hold that in retroactive cases like (7) the explicature of the utterance really is given by the weaker, logical reading, while the pragmatically strengthened reading is generated as some kind of (retroactively triggered) meta-representation of the original utterance. A worry, however, with any such move is that it threatens to undermine the RT account of putative truth-conditional relevance (since it seems that some additional account of the putative truth-conditional relevance of the meta-representation will then be needed). Even given this, however, it is not at all obvious that this line of response is in fact open to RT. The problem here turns on the conditions they offer for determining which pragmatic features count as part of the explicature and which contribute to the implicatures. It seems that there are two conditions which explicature content must fulfill: first, it must be an expansion of the logical form of the utterance and, second, it must satisfy the ‘scope principle’. According to this scope principle a pragmatically determined aspect of meaning is part of what is said if it falls within the scope of logical operators such as negation and conditionals. Clearly a pragmatic strengthening of a scalar term satisfies the first of these two conditions: it certainly seems to be an expansion of the logical form, since it enriches the interpretation of a lexical constituent of the utterance. However, it also seems that (at least on the current account) it satisfies the second condition: if a context is such as to give rise to a pragmatic enrichment of a scalar term in a given utterance then it seems that that pragmatically enriched reading will also fall under any logical operators. 
So, it seems that either a pragmatically enriched reading shouldn’t be generated at all (so that the explicature of ‘some’ is given just by the logical reading) or a context supports the pragmatically strengthened reading and this then constitutes the explicature of the utterance. It is thus not clear how, according to relevance theory, a context could be such as to support the pragmatic reading of a scalar term and yet that reading could then fail to count as an ordinary part of the explicature. Yet unless this is possible the option of treating the pragmatic reading of ‘some’ in (7) as some kind of metarepresentation, distinct from the ordinary explicature, seems closed to RT.
Turning now to condition (C), the requirement that the theory fit with experimental evidence concerning implicature recovery: once again, it might seem that RT is in a good position to meet this condition. Recall that, for RT, it is perfectly plausible that in many contexts it is the logical interpretation which first crosses the relevance threshold. Thus, where hearers go on to recover a pragmatically enriched interpretation this will be the result of further processing. In these cases, the pragmatic reading is effortful and thus the prediction that the pragmatic reading should take longer to recover seems to follow naturally. Thus as Bott and Noveck (2004: manuscript version, p. 42) suggest: ‘Relevance Theory would argue that inferences arise as a function of effort…Thus, according to Relevance Theory, the logical response ought to be faster than a pragmatic response.’
However, on closer inspection, it is not so clear that RT does fit well with the experimental findings. A first worry is that, on the RT view, the pragmatic reading can form the explicature of an utterance (i.e. the first interpretation which provides sufficient cognitive effects at the cheapest cognitive cost). Thus, if the recovery of enriched scalar terms is held to be effortful and time-consuming, so too should be the recovery of other kinds of pragmatically enriched explicatures. Yet it seems unlikely that RT theorists would want explicature recovery in general to be marked by time-delays. For instance, consider:
8) ‘The apple is red’
(a) The apple is red (in some way)
(b) The apple is red on most of its skin
9) ‘Everyone was sick’
(a) Everyone was sick
(b) Everyone at the party was sick
In each of these cases, in the right context, the (b) interpretation would form the pragmatically enhanced explicature of the original utterance, with (a) providing an interpretation which is free from pragmatic effects. Yet it seems extremely unlikely that advocates of RT would want to predict that the (b) interpretations are recovered more slowly than the (a) interpretations; thus the explanation of the time delay data on GCIs seems threatened. Of course, advocates of RT either need to hold that time delay effects will be present in the recovery of all pragmatically enhanced explicatures or that the pragmatically enhanced explicatures in (8) and (9) are in some way different to those recovered via scalar implicatures. Pursuing this second disjunct might allow advocates of RT to accommodate the time delay data appropriately, but someone pursuing this line obviously still owes us an account of what the relevant difference here is.
A second worry with RT’s explanation of the experimental data concerns children’s recovery of scalar implicatures. As many theorists have observed, children are less likely to give pragmatic interpretations to scalar items; instead, in this respect at least, children turn out to be much more logical than adults (Rips 1975, Noveck 2001, Papafragou & Musolino 2003). Yet it is at least prima facie unclear how this data is to be accommodated on the current view, for RT predicts that both adults and children are in possession of the right reasoning processes to generate both explicatures and implicatures (i.e. both possess the same relevance-directed mechanisms). Furthermore, children are as good as adults in utilizing these processes in most other respects. For instance, it is the RT mechanisms which, ex hypothesi, explain how putative semantic underdetermination is to be overcome (i.e. how an utterance of ‘I’ve eaten’ gets interpreted in a context as explicitly meaning that I’ve eaten recently), yet children overcome semantic underdetermination just as well as adults. So it looks, prima facie, as if RT owes us an account of why children fail to use the comprehension mechanisms they do supposedly possess in just these cases. That is to say, why does a child who can move from ‘The apple is red’ to the apple is red on the outside via RT mechanisms fail to move from ‘Some children like sprouts’ to at least one but not all children like sprouts via identical mechanisms?
To summarize: RT doesn’t place any theoretical weight on the distinction between GCIs and PCIs. While this was one of our original desiderata, it was the weakest of the three and if RT proved successful in accommodating the other two, this wouldn’t be much of a problem. However, I’ve tried to suggest that it is far from clear that RT does offer a successful account of the two remaining conditions. First, the account seems to face problems with explaining the putative truth-conditional relevance of implicatures due to the possible retrospective nature of pragmatic interpretation. Since we may end up revising our interpretation of a scalar term at any later point in the conversation it seems that either RT must maintain that the explicature of the utterance is not settled until we have made sure that no strengthening triggers occur at any later point (an unappealing suggestion) or it requires some alternative account of the apparent truth-conditional relevance of the pragmatically strengthened reading in cases like Horn’s. Secondly, it seems that there are problems accounting for the experimental data since RT predicts that where GCI effects occur they can contribute to the explicature of the utterance, yet it is at best a moot point as to whether RT wants to be committed to predicting similar time delay effects for the recovery of all pragmatically enhanced explicatures. Furthermore, it is not immediately clear how the account copes with the difference between GCI recovery in adults and children, since it predicts that exactly the same mechanism is responsible for GCI and other forms of explicature recovery in adults and children, yet in many other cases of explicature recovery children seem unimpaired. Given these worries, then, I think we are licensed in turning to our third and final account.
According to semantic minimalism, there is, just as Grice predicted, a minimal sentence meaning to be recovered from a constrained set of pragmatic processes (such as disambiguation and reference assignment for indexicals and demonstratives). This minimal meaning (minimalists tend not to call it ‘what is said’ since they conceive of ‘saying’ as a thoroughly pragmatic notion) can diverge to a greater or lesser extent from what a speaker conveys by the utterance of the sentence in a given context, so minimalism fully embraces the fundamental Gricean distinction between literal sentence meaning on the one hand and speaker meaning on the other. Furthermore, minimalists agree with Grice that an interpretation of ‘Some Fs are G’ as some but not all Fs are G lies on the implicature side of the divide, going beyond the content recoverable by lexical information and rules of composition alone. According to semantic minimalism, semantic content is marked out by one (or both) of the following features:
i) every contextual contribution to semantic content is syntactically triggered.
ii) every contextual contribution to semantic content is formally tractable.
Concentration on (i) yields the kind of minimalism defended by Cappelen and Lepore 2005, where any relevant feature of the context can be brought into the semantic domain by an appropriate syntactic cue. On the other hand, a combination of (i) and (ii) yields the kind of minimalism defended by Borg 2004. On this view, even when a syntactic item requires a contextual contribution the relevant contextual features must themselves be formally tractable, and speaker intentions, it is argued, lie outside this formal domain (the argument for this claim is the ‘Frame Problem’, discussed by Fodor 2000: 37-8 and Borg 2004b). According to the former variety of minimalism, an interpretation counts as pragmatic just in case it contains elements not directly contributed by the syntactic constituents of the sentence. According to the latter, the difference between semantic and pragmatic features lies in the kind of comprehension processes underlying their recovery: semantic content can be recovered via computational operations alone, while pragmatic content relies on non-formal, abductive processes (capable of recovering any contextual feature, up to and including speaker intentions) for its recovery.
Given the rendering of the semantics/pragmatics distinction within my variety of minimalism, however, it seems that GCIs end up straddling the divide. They cannot constitute semantic content since they require information beyond the strictly semantic to recover (i.e. information about what speakers using the given terms commonly tend to convey). However, they do not seem to constitute full-blown pragmatic content, since they do not rely on access to the current speaker’s state of mind. A GCI can be recovered just by knowing that a speaker who says ‘some’ often conveys some but not all, but this information could be gathered as the result of exposure to past conversational exchanges; it would not require access to the current state of mind of the speaker. Thus recovery of a GCI, like the recovery of literal meaning on this picture, could be the result of a purely computational, syntax-driven reasoning process. This embeds the intuitive distinction Grice recognised between GCIs and PCIs, for it treats them as the results of different kinds of cognitive processing. This in turn might suggest the following kind of minimalist model for implicatures: there is assumed to be a modular language faculty, responsible for linguistic processing up to and including literal semantic content, and this content then feeds to two further systems – the holistic, general pragmatic system (responsible for recovering PCIs) and the more limited (and possibly modular in its own right) system for recovering GCIs, which operates over statistical facts about what speakers have intended in past conversational exchanges. As noted, it seems such a model would meet desideratum (A), but what about (B) and (C)?
Turning to (B) first: the advocate of minimal semantics will have to bite the bullet with respect to the apparent semantic relevance of some implicatures, claiming that in all cases it is an appearance of truth-conditional relevance not a genuine case of rich pragmatic input to semantic content. Thus, given (3) and (4) (repeated here):
3) If Jill blew the whistle on poor practices at work and was sacked, then she is entitled to compensation.
4) If Jill was sacked and blew the whistle on poor practices at work, then she is entitled to compensation.
the minimalist must hold that there is no semantic difference between the two. However, while this may seem simply unacceptable, we should note that the current model can offer an explanation of why we hear a difference of meaning in such cases. For given the context-invariant aspects of GCI recovery it is unsurprising that we have the intuitions we do about cases like the above. GCIs are formally derived; they are mechanical and habitual, just as the recovery of semantic content is for the minimalist, so it is no surprise that we can’t help but hear a serious difference in cases like these. The way our minds work on this model explains why we find it so easy to hear what is really a difference in speaker meaning as a difference in sentence meaning. Thus although the current account rejects (B) as a valid constraint on a theory of implicature, it can explain why it seems like a genuine condition. Furthermore, we might also note that this picture, whereby GCI recovery shares features with both the semantic and the fully pragmatic (PCI) side of the divide, might help to explain why GCIs display the kind of habitual nature and context-independence stressed by Levinson, since they are indeed accessible without any knowledge of the aims and intentions of the particular speaker and they are generated by purely computational, deductive processes.
Finally, then, turning to (C): the account seems to fit well with the experimental data canvassed earlier. It can predict the time lag data of Noveck et al, since a GCI is indeed a further interpretation above and beyond the semantic content of the sentence. Furthermore, the account does not need to predict a general time delay in implicature recovery, for it may be that while further computational processing of an utterance is effortful, subjecting the literal content of the utterance to further non-computational processing is relatively effortless (to put things crudely, the thought would be that getting at what a speaker intends by her utterance comes naturally to us, or at least more naturally than does reflecting on what speakers tend to convey by specific terms). This minimalist account also explains the relative lack of GCI recovery in children, for it allows that though they have the cognitive resources for drawing both GCIs (which are computationally derived, as is semantic content) and PCIs (abductively derived), still they lack the necessary premises for (at least some cases of) GCI recovery, viz. information about the usual use of scalar terms in conversational exchanges. As exposure to normal language use increases so the drawing of implicatures based not directly on particular speaker intentions but on the characteristic use of words increases. It seems, then, that this minimalist theory of implicatures fares reasonably well with the desiderata we set out in §1.
After looking at three different neo-Gricean accounts of implicature, Borg opts for the paleo-Griceian, as she tries "to argue that it is the final version, drawn from minimal semantics, which gives the most satisfactory account of the phenomenon."
The Default Interpretation view is to be commended for taking seriously the difference between GCIs and PCIs and recognising the habitual and context-independent nature of the former. However, we saw that the approach ran into problems with its account of the apparent semantic relevance of GCIs and with the experimental data concerning time delays for the recovery of the pragmatic reading of scalar terms in adults.
Next we turned to relevance theory. On this account the distinction between GCIs and PCIs disappeared, with all implicatures being further interpretations of an utterance driven by relevance theoretic mechanisms. Whether or not this failure to meet desideratum (A) mattered turned on the account’s success with our other two conditions. However, despite initial appearances that RT coped very well with conditions (B) and (C), I suggested that there were underlying problems. First, questions were raised about RT’s account of the putative semantic relevance of GCIs due to the retrospective nature of implicature accommodation at the semantic level. Secondly, questions were raised about how the theory fits with experimental evidence: it doesn’t seem to accommodate the time delay data from Noveck (since it doesn’t predict time delays for explicature recovery in general), nor does it seem to accommodate the variation in GCI recovery between adults and children (since it predicts both have the right mechanisms in place for such recovery and predicts both make successful use of those mechanisms in other settings).
Finally, I turned to minimal semantics, and suggested that on this account GCIs be treated as computationally-based, habitual inferences, which contrast with the recovery of PCIs (where PCI recovery requires abductive reasoning based on hypotheses about the speaker’s intentions). I suggested that this minimalist account not only preserves the intuitive divide between GCIs and PCIs, but it is also able to explain the experimental data (both concerning time delay effects in adults and the lack of GCI recovery in children). Furthermore, it offers at least some explanation of the apparent truth-conditional effects of GCIs. Minimalism holds that these effects are indeed apparent since, as Grice held, GCIs do not contribute to the recovery of the semantic content of a sentence (a process which the minimalist can hold takes place within a computational language faculty, which does not have access to such contextual information as the further propositions standardly communicated by speakers uttering ‘p’). However, the account also predicts that the impression that GCIs do contribute to semantic content is pretty much unavoidable given the nature of GCI derivation. For these reasons, then, I conclude that the third theory of implicature, drawn from minimal semantics, deserves further investigation.
Notes:
We should note, as Bach 2006 stresses, that Grice himself is apparently offering a rational reconstruction of the recovery of implicatures by ordinary speakers, not a theory of psychological processing. So empirical data concerning implicature recovery does not tell directly for or against his theory. Yet it seems clear that the accounts which seek to supplant Grice are concerned with providing psychologically real accounts of implicature recovery, thus empirical evidence is relevant.
Grice himself was somewhat doubtful about the former category, stating that ‘the nature of conventional implicature needs to be examined before any free use of it, for explanatory purposes, can be indulged in’ (1989: 46) and recently some theorists, such as Bach 1999, have argued that there are no such things as conventional implicatures. Although I don’t want to pursue this debate here, I do think there is enough doubt about the status of conventional implicatures to warrant our concentration on conversational implicatures.
Though we should note that Bach 2006 lists the labelling of scalar inferences as GCIs in the ‘top ten misconceptions’ about conversational implicature. For him, these are clear cases of impliciture. The experimental findings here have been disputed, for instance by Bezuidenhout and Cutting 2002. However, as noted in Breheny et al 2006, questions may be raised about the form of the experiments undertaken by Bezuidenhout and Cutting with respect to their relevance for scalar implicatures. While the status of recent empirical work is no doubt always open to question, I will assume in what follows that the findings of time-delays are significant. It should also be noted, as pointed out by an anonymous reader, that the experimental findings discussed here and below relate only to scalar implicatures and not GCIs in general. Hence if there was a reason, as Bach 2006 argues, to hold that scalars are not GCIs then it would be no objection to a theory of GCIs that it didn’t accommodate the time-delay data for scalar cases. However this move makes it imperative that scalars be shown not to be genuine GCIs, a claim which remains controversial and which, in fact, none of the theorists under discussion here (such as Levinson) embrace.
One move an advocate of this position might make would be to suggest that intuitions about validity run on grammatically derived content only, ignoring locally derived pragmatic content. However, any such move would seem to sit uncomfortably with the original motivation for treating GCIs as contributing to semantic content, which turned on intuitions about the truth-conditions of conditional statements. A second option would be to claim that the invalidity is explained by the fact that the meaning of ‘elephant’ entails ‘is a mammal’. However, this move would depend on a potentially questionable view of meanings as
definitions and might not help in all cases (e.g. the invalidity of ‘some penguins can’t fly, therefore there is at least one penguin which can fly’).
Another possible response here, raised by Daniel Watts, would be to allow that hearers derive both the logical and the pragmatic readings as explicatures and then wait for later conversational developments to tell them which one to cancel. Such a move, however, would run counter to RT (which holds that we stop processing once we have one interpretation that crosses the relevance threshold). It would also seem to impose quite serious cognitive burdens on agents, requiring them to hold multiple interpretations for multiple sentences for an indefinite length of time. Finally, it might interfere with a hearer’s ability to draw PCIs from an utterance, since, on this account, they remain essentially ignorant of the explicature at first pass.
A final point to note with respect to this argument concerns how the Gricean fares with examples like (7). On this approach, since GCIs cannot infect semantic content, the claim must be that the utterances are literal contradictions: to say that I will be happy if some friends come to the party, but not happy if they all come, is to contradict myself since all of them coming entails some of them coming. The explanation of why utterances like (7) seem acceptable then lies in the fact that hearers are concentrating on GCI content, not semantic content (this claim is fleshed out somewhat in §4 on minimalism). Clearly, however, this kind of explanation is not available to advocates of RT since explaining the apparent truth-conditional relevance of GCIs in cases like (7) without positing actual truth-conditional relevance would undermine RT’s motivation for ever treating GCIs as genuinely semantically relevant. Thanks to Jim Levine for stressing this point.
The idea would thus be that GCIs are a kind of abstraction from, or ossification of, PCIs: an agent starts by grasping a pragmatically enhanced reading of a scalar term as a PCI but later they come to grasp such readings simply through knowing how words are commonly used. It is because GCI understanding is taken, on this view, to emerge from prior PCI understanding that the mechanisms underpinning GCI recovery are held to lie outside the language faculty proper.
So does the account need to predict a general time-delay for all GCIs (a prediction which seems unlikely to be borne out since, e.g., hearing ‘and’ as ‘and then’ seems unlikely to be marked by any significant time-delay)? I think the answer to this question is ‘no’, for there are at least two moves available on the current model to explain differences in time-delay data for GCIs. First, it could be argued that, in at least some cases, the ossification of PCIs into GCIs is more extreme than for scalars, so that the time delay reduces, in some cases yielding an insignificant degree of delay, or no delay at all. A prediction on this line of explanation would be that different subjects, with different degrees of familiarity with a language, should exhibit different degrees of delay (with the time-delay reducing as GCIs become more firmly embedded for the subject). Secondly, it might be argued that in some cases of apparent GCI recovery a hearer can’t in fact access the right content without reference to the specific context and what the speaker in that context intends, thus these inferences are in fact PCIs and thus no time-delay is expected. I’m grateful to Manuel Garcia-Carpintero, in conversation, and an anonymous reader, for pressing me on this point.
An additional piece of experimental evidence which might be thought relevant here comes from Storto & Tanenhaus’s 2004 observation that the exclusive interpretation of ‘or’ (often taken to be a GCI) is accessed locally. That is to say, hearers are able to utilise the information ‘one or the other but not both’ to help to instigate action on-line, at the point at which they hear the term ‘or’, rather than waiting until sentence comprehension is completed. However, although this information is available very early, their experiments showed that it is not available quite as early as genuine lexical information. Although they note that this feature might be a result of problems with their experimental design, if it were to prove robust for scalar implicatures in general it might provide further support for a view of GCI derivation as midway between semantic and full-blown pragmatic processing.
REFERENCES
Bach, K. (1999) ‘The myth of conventional implicature’. Linguistics and Philosophy,
22: 327-66.
Bach, K. (2006) ‘The top ten misconceptions about implicature’. In Drawing the Boundaries
of Meaning: Neo-Gricean studies in pragmatics and semantics in honor of Laurence R.
Horn, B. Birner and G. Ward (eds). Amsterdam: John Benjamins. 21-30.
Bezuidenhout, A. (2002) ‘Generalized conversational implicatures and default
pragmatic inferences’. In J.Campbell, M. O’Rourke, and D. Shier (eds) Meaning and Truth. Seven Bridges Press. 257-83.
Bezuidenhout, A. & Cutting, J.C. (2002). ‘Literal meaning, minimal
propositions and pragmatic processing’. Journal of Pragmatics, 34: 433-56.
Borg, E. (2004) Minimal Semantics. Oxford: Oxford University Press.
Borg, E. (2004b) ‘Formal semantics and intentional states’. Analysis, 64: 215-23.
Borg, E. (2007) ‘Minimalism versus Contextualism in Semantics’. In Context-Sensitivity and
Semantic Minimalism: New Essays on Semantics and Pragmatics. G. Preyer and G. Peter (eds). Oxford: Oxford University Press. 546-571.
Borg, E. (forthcoming) ‘Meaning and Context: a survey of a contemporary debate’. In The Later Wittgenstein on Language, ed. D. Whiting. Palgrave.
Bott, L. & Noveck, I.A. (2004) ‘Some utterances are underinformative: the onset
and time course of scalar inferences’. Journal of Memory and Language, 51: 437-457.
Breheny, R., Katsos, N. & Williams, J. (2006) ‘Are generalised scalar
implicatures generated by default? An on-line investigation into the role of context in generating pragmatic inferences’. Cognition, 100: 434-463.
Cappelen, H. & E. Lepore (2005). Insensitive Semantics: A Defense of Semantic
Minimalism and Speech Act Pluralism. Oxford: Blackwell.
Carston, R. (2002) Thoughts and Utterances. Oxford: Blackwell.
-- (2004) ‘Review of S. Levinson, Presumptive Meanings’. Journal of Linguistics, 40: 181-6.
-- (2004) ‘Truth-conditional content and conversational implicature’. In C. Bianchi (ed.), The Semantics/Pragmatics Distinction. Stanford: CSLI Publications. 65-100.
Chierchia, G. (2004) ‘Scalar implicatures, polarity phenomena, and the syntax/pragmatics interface’. In A. Belletti (ed.), Structures and Beyond. Oxford University Press: Oxford. 39-103.
Davis, W. A. (1998) Implicature: Intention, Convention, and Principle in the Failure of Gricean Theory. Cambridge: Cambridge University Press.
Fodor, J. (1983) Modularity of Mind. Cambridge, MA: MIT Press.
Fodor, J. (2000) The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology. Cambridge, MA: MIT Press.
Grice, H. P. (1941) ‘Personal identity’. Mind.
-- (1948) ‘Meaning’.
-- (1949) ‘Disposition and intention’.
-- (1961) ‘The causal theory of perception’.
-- (1966) ‘Logic and conversation’. The Oxford lectures.
-- (1967) ‘Logic and conversation’. Reprinted in revised form in Grice (1989) Studies in the Way of Words. Cambridge, MA: Harvard University Press.
-- (1991) The Conception of Value.
-- (2001) Aspects of Reason.
Horn, L. (2006) ‘The border wars: a neo-Gricean perspective’. In K.
Turner & K. von Heusinger (eds), Where Semantics Meets Pragmatics. Elsevier. 21-48.
-- ‘A brief history of negation’.
Langdon, R., M. Davies, and M. Coltheart (2002) ‘Understanding minds and communicated meanings in schizophrenics’. Mind and Language, 17: 68-104.
Levinson, S. (2000) Presumptive Meanings: The Theory of Generalized Conversational
Implicature. MIT Press, Cambridge, MA.
Noveck, I. A. (2001) ‘When children are more logical than adults. Investigations of
scalar implicature’. Cognition 78: 165-188.
Noveck, I. A. (2004) ‘Pragmatic inferences related to logical terms’. In I.A. Noveck &
D. Sperber (eds) Experimental Pragmatics. Basingstoke: Palgrave Macmillan. 301-
321.
Papafragou, A. & Musolino, J. (2003) ‘Scalar Implicatures: experiments at the
semantics-pragmatics interface’. Cognition 86: 253–82.
Recanati, F. (1993) Direct Reference: From Language to Thought. Oxford: Blackwell.
Rips, L.J. (1975) ‘Quantification and semantic memory’. Cognitive Psychology 7: 307-40.
Sauerland, U. (2004) ‘Scalar implicatures in complex sentences’. Linguistics and
Philosophy, 27: 367-91.
Sperber, D. & Wilson, D. (1986) Relevance: Communication and Cognition. Oxford:
Blackwell.
Stanley, J. (2005) ‘Semantics in context’. In G. Preyer & G. Peter (eds). Contextualism
in Philosophy: Knowledge, Meaning, and Truth. Oxford: Oxford University Press. 221-54
Stanley, J., and Z. Szabo (2000) ‘On quantifier domain restriction’. Mind and
Language 15: 219-261.
Storto, G. & Tanenhaus, M. (2004) ‘Are scalar implicatures computed online?’
Proceedings of WECOL 2004.