The Grice Club

The club for all those whose members have no (other) club.

Is Grice the greatest philosopher that ever lived?

Friday, May 6, 2011

Matthew Stone on Griceian rationality

by J. L. Speranza
for the Grice Club

"An essential ingredient," Matthew Stone argues, "of language use is our ability to reason about utterances as intentional actions.

Linguistic representations are the natural substrate for such reasoning, and models from computational semantics can often be seen as providing an infrastructure to carry out such inferences from rich and accurate grammatical descriptions.

Exploring such inferences offers a productive pragmatic perspective on problems of
interpretation, and promises to leverage semantic representations in more
flexible and more general tools that compute with meaning.

In the philosophy of language, theorists have often attributed much of the flexibility with which we use language not to the complexity of our linguistic system, but rather to our robust abilities to reason about other agents' actions, choices and motivations.

This insight, as we don't have to remind club members, is originally due to Grice, an Oxford philosopher [19, 20].

Stone follows the practice of referring to such reasoning as Griceian ("I prefer Griceian, in that it flows better in the mouth").

The examples in (1) below illustrate Griceian reasoning.

Each has a natural interpretation that departs from the literal meaning
that we would expect our grammar to assign it.

(1) a. Can you pass the salt?
    b. You may now go.
    c. Chris is a tiger.

(Stone acknowledges Rich Thomason and Bonnie Webber, as well as his collaborators David DeVault, Paul Tepper and Jesse Fischer at Rutgers.)

"Can you pass the salt?"

This is a yes-no question, but can function as a request.

"Yoy may now go". This is a statement of fact, but can function to give the permission it describes.

"Chris is a tiger". This locates an animal in the natural order, but can function to characterize a human personality.

These apparently unexpected functions may actually follow from our abilities
to understand one another by reasoning cooperatively.

For example, Searle explains utterances like (1a) by recognizing that
observers can, in clear cases, anticipate an agent's planned future actions from
an agent's present behavior.

In particular, Searle suggests that "Can you pass the salt?" functions as
a request because it so clearly triggers the mutual expectation that the speaker
plans a request eventually.

Similarly, Lewis explains utterances like "You may now go" by observing that collaborative social agents can take up and accomplish the goals that they recognize behind other agents' plans, even when the plans they recognize are themselves defective.

Lewis suggests that what is permitted is defined by mutual understanding, within the context of agents' social relationships, and from this background argues that "You may go now" functions to give permission because the utterance so clearly communicates the goal of registering the speaker's newly defined permission on the common ground.

Finally, Grice explains utterances like (1c) (compare his own "You are the cream in my coffee") by observing that agents can rely on mutual assumptions of rationality to coordinate not just on literal interpretations of their utterances, but also on salient reinterpretations, at least with suitable models of interpretation.

In particular, with the word tiger in (1c), speaker and audience
can coordinate to evoke a concept of aggression that is saliently associated with
tigers' behavior.

The Gricean reasoning we see in cases such as these reminds us that conventionalized literal meanings cannot always do justice to our intentions. In using language, we are improvising in an open-ended world, and we must act creatively if we are to meet our goals for communication, or balance them against the other significant goals we have for our interactions, including goals for self-presentation, affiliation and autonomy.

Utterances like "Can you pass the salt?", "You may now go" and "Chris is a tiger" are not the sort that we typically aim to account for in principled ways in computational systems; we find enough in much more straightforward examples to occupy ourselves fully.

Nevertheless, Stone believes that
reconciling formal models of agency with formal models of meaning, in the ways
utterances like those three above appear to require, is one of the most vital challenges for the cognitive science of language in general, and for computational semantics in particular.

It's not just because our systems will someday have to exhibit the flexibility of language and action we see in those three utterances above, if we are correct that this flexibility is part and parcel of successful communication in the real world.

More immediately, it's because general models of agency provide substantive
high-level constraints we can draw on to flesh out semantic theories in more
detail and to implement such ideas in extensible and elegant architectures.

Few other frameworks provide such a clear guide to what to do next, and how to do
it. Stone's goal is to justify this contention.

Stone begins by exploring the close relationship between dynamic
approaches to semantic interpretation and philosophical accounts of
interpretation as intention recognition such as those introduced
in connection with those three utterances above.

It seems a short and natural step to move from a description of meaning in terms of change, as in dynamic semantics, to the description of meaning in terms of actions that cause change that the pragmatic theories require.

But to take that step, we need the confidence to scrap those parts of the received wisdom that no longer fit our present motivations.

Taking the step opens up a number of possibilities for applying ideas from computational semantics in new ways. Stone concludes by discussing two specific cases in which we are required to infer a speaker's assumptions about the context from their utterances.

Stone goes on to describe a computational model of the interpretation
of vague utterances. And he describes a reversible model of
referring-expression generation.

Both models combine dynamic semantic representations
with genuinely pragmatic processes of intention-recognition.

Overall, Stone explores the semantic implications of work that he has presented to rather different audiences, and sketches ongoing investigations that he and his colleagues aim to present more fully in the future.

The basic perspective of Gricean pragmatics is to analyze utterances as actions.

This analysis frames knowledge of meaning in terms of the potential communicative effects of utterances, and locates a speaker's meaning in the choice of a particular utterance for an assortment of its effects.

Adopting this perspective
in systems that apply sophisticated knowledge of linguistic semantics should
make computing meaning more robust and more useful.

For example, on this
perspective, understanding is a problem of recognizing the speaker's rationale
for using an utterance in context.

Such a rationale will link the utterance to the
agreed context and purposes of the conversation, as understood by the speaker.

Accordingly, it will provide a broad gauge of what makes sense in context, which
the system can draw on in processes such as ambiguity resolution.

Planning a
response, meanwhile, means acting cooperatively to meet the requirements and
objectives of the ongoing conversation.

Models of cooperation provide flexible
guides to action not just when a collaboration proceeds normally, but also
in cases of unexpected outcomes, misunderstandings and disagreements.

Finally,
in generation, this perspective calls for representing the rationale behind
a system's own actions.

This can help a system keep its utterances concise
by anticipating whether its utterances will be understood as intended, and can
provide the system with the wherewithal to follow its utterances up through
further interaction with the user.

This Gricean model represents a widespread view, particularly within the
American AI tradition.

The work of Allen and colleagues provides a
prominent example of the construction of conversational systems through general
models of agency.

And there are many other examples (too many to survey
in this short space).

In the past, systems in this tradition have tended to
abstract intentionality away from linguistic representations and processes.

For
example, they have tended to encapsulate any Gricean reasoning in autonomous
modules which presuppose fully disambiguated logical forms as inputs or outputs.

This has left little opportunity to leverage computational semantics for
conversational agency (or conversely, to apply agency in computing meaning).

At present, however, this state of affairs is starting to change.

Stone sees two reasons for this.

First, computational semantics increasingly offers ready abstractions with which to precisely characterize the intended effects of an utterance.

Take utterance (2) below, after Donnellan (cited by Grice in "Vacuous Names"), which we imagine as the utterance of a speaker A on some particular occasion.

"The man with the champagne is happy."

One aspect of the speaker's meaning here is to use the definite description "the man with the champagne" anaphorically, to evoke some discourse referent m from the context.

Another aspect of meaning is to contribute the information that m is happy to the conversational record, an evolving abstract model of the agreed content of the conversation.

Such abstractions are enough to explain why A would choose (2) and commit
to utter it.

In this sense, they determine A's intention.

The content of this
intention can be spelled out as follows.

A utters (2), "The man with the champagne is happy".

In the context, some referent
m is the man with the champagne.

Thus, as a result of uttering (2) in the context,
it becomes part of the conversational record that m is happy.
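
To make the two aspects of this meaning concrete, here is a minimal sketch in Python (an illustration, not Stone's implementation), assuming a toy conversational record represented as a set of ground atoms: the definite description is resolved anaphorically against the record, and the utterance's contribution is then added to it.

```python
# Toy conversational record as a set of ground atoms (a sketch, not Stone's
# implementation): resolve the definite description anaphorically, then add
# the utterance's contribution.

def men(record):
    return {a[1] for a in record if a[0] == "man"}

def drinks_of(record, x):
    return {a[2] for a in record if a[0] == "with" and a[1] == x}

def resolve_man_with_champagne(record):
    """Resolve 'the man with the champagne' against the record (anaphoric use)."""
    candidates = {x for x in men(record)
                  if any(("champagne", d) in record for d in drinks_of(record, x))}
    if len(candidates) != 1:
        raise ValueError("description does not evoke a unique discourse referent")
    return candidates.pop()

def contribute(record, predicate, referent):
    """Return the record after the utterance's contribution is taken up."""
    return record | {(predicate, referent)}

# Context for (2): m is a man, with the champagne d.
record = {("man", "m"), ("with", "m", "d"), ("champagne", "d")}

m = resolve_man_with_champagne(record)    # the anaphoric aspect of the meaning
record = contribute(record, "happy", m)   # the contribution: m is happy
print(("happy", "m") in record)           # True
```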

Stone refers to this
content as i1.

Stone goes on to understand i1 as a pragmatic interpretation, a privileged symbolic structure through which we manage our efforts to understand, be understood, and play our part in the conversation.

Such abstractions really are a close fit to the theory of intention, as independently developed by Bratman, Pollack, and others.

In this theory,
any plan or intention is a complex mental representation summarizing an agent's
reason to act.

It lays out a course of related actions, identifies some key circumstances that now hold in the world, and shows why the agent might expect the planned actions to lead to a desirable outcome in the current circumstances.

Thus, when we recognize someone's intention, we know what they think they are
doing, why they are doing it, and how they must think it will work.

In applying
this model to utterances, we require intentions that associate a concrete linguistic
structure such as (2) (the course of action) with inferences that show how,
in the current discourse context (the key circumstances), the meaning of that
structure brings specific information to the conversation (a desired outcome).

This is indeed how we characterized (2) with i1 above.

A second reason Gricean inference can increasingly interact with computational
semantics is our general experience with complex symbolic representations
that spell out links between an utterance and the context and goals of
language use.

These representations include highly-structured objects such as
feature structures and proof-terms.

Our experience with such representations paves the way to recast formalisms for interpretation in explicitly pragmatic terms.

Recall that plans and intentions don't just describe changes
to the context, they explain how those changes come about.

Plans and intentions
such as i1 must be represented as arguments, inferences that track the
causal connections at play in an utterance, and use those causal connections to
predict the effects those utterances might hypothetically achieve.

Consider our example (2) and its pragmatic interpretation i1.

(3) reports i1
as an axiomatic deduction in a modal logic of knowledge and time, using [cr]p
meaning that p follows from the conversational record and using [n]p meaning
that p holds after the current action takes place.

(3)

a. [cr](man(m) ∧ with(m, d) ∧ champagne(d))
   Assumption about context

b. ([cr](man(M) ∧ with(M, D) ∧ champagne(D)) ∧ A utters (2)) → [n][cr]happy(M)
   Dynamic semantics

c. A utters (2)
   Hypothesized course of action

d. [n][cr]happy(m)
   Desired effect, by modus ponens and instantiation from (3a-3c).
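
The inferential form of (3) can be realized quite directly in code. The sketch below is a hedged illustration (not Stone's system, and the names Rule, match and derive are my own): the dynamic-semantics axiom (3b) becomes a rule with variables, the contextual assumption (3a) and the hypothesized action (3c) become facts, and the desired effect (3d) falls out by matching the rule's antecedent against the facts, in the spirit of the logic-programming trace mentioned just below.

```python
# A hedged sketch of (3) as an explicit inference.  Terms beginning with '?'
# are variables; everything else is a constant.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    antecedent: tuple   # patterns that must be matched by the facts
    consequent: tuple   # atom that holds after the utterance, under [n][cr]

def match(pattern, fact, binding):
    """Extend `binding` so that `pattern` equals `fact`, or return None."""
    if len(pattern) != len(fact) or pattern[0] != fact[0]:
        return None
    binding = dict(binding)
    for p, f in zip(pattern[1:], fact[1:]):
        if p.startswith("?"):
            if binding.setdefault(p, f) != f:
                return None
        elif p != f:
            return None
    return binding

def derive(rule, facts):
    """Instantiate the consequent for every way the antecedent matches the facts."""
    bindings = [{}]
    for pattern in rule.antecedent:
        bindings = [b2 for b in bindings for fact in facts
                    if (b2 := match(pattern, fact, b)) is not None]
    return [tuple(b.get(t, t) for t in rule.consequent) for b in bindings]

# (3a) assumption about context and (3c) hypothesized course of action, as facts:
facts = {("cr", "man", "m"), ("cr", "with", "m", "d"), ("cr", "champagne", "d"),
         ("utters", "A", "(2)")}

# (3b) dynamic semantics of (2), with variables ?M and ?D:
axiom = Rule(
    antecedent=(("cr", "man", "?M"), ("cr", "with", "?M", "?D"),
                ("cr", "champagne", "?D"), ("utters", "A", "(2)")),
    consequent=("n", "cr", "happy", "?M"),
)

print(derive(axiom, facts))   # [('n', 'cr', 'happy', 'm')] -- the desired effect (3d)
```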

We could also record such inferences as terms in a suitable type theory, as in [33],
or even as the trace of a suitable logic-programming interpreter, as in [24].

One
advantage of the inferential form of (3) is thus that it supports the treatment
of local pragmatics from earlier inferential accounts.

For example, the inference
from (3a) to the antecedent of (3b) leaves room for bridging in the resolution of
presuppositions. And the inference from the consequent of (3b) to (3d) leaves
room in the interpretation for certain kinds of relevance-guided implicature.

Pragmatic interpretations such as (3) are linguistic representations that support
Gricean inference.

They not only describe an utterance in linguistic terms
but map out a reason to use the utterance.

They can figure not just in understanding but in general models of deliberation and cooperation.

As we develop
such structures, we are no longer just applying theoretical ideas from AI, linguistics, and philosophy.

We are synthesizing them into something new.

For example, pragmatic interpretations mark a departure from the tradition of formal specification of agents within the knowledge representation community.

This tradition originates in Levesque's view of KR as a tool whose main use is to characterize, validate and interact with agents [29], and is epitomized by the intricate attitude-definitions for cooperative agents that occupy Cohen and Levesque, and even to some extent the related work of Pollack and Grosz and colleagues.

Of course, the attitudes of agents in using utterances should conform to normative specifications of cooperation, but for us that is secondary.

Fundamentally, pragmatic interpretations formalize the
content of intentions as data structures for agent implementations.

Pragmatic interpretations also steer clear of traditional oppositions from linguistics, such as that between unstructured representationalist views of interpretation, as in DRT, and the more information-theoretic approach of dynamic
semantics.

Consider the treatment of presupposition in particular.

Presupposition figures in (3) in the content of axiom (3b), which describes the context-change potential of the utterance.

According to (3b), the contribution of the utterance is contingent on finding a man M and a drink of champagne D as part of the conversational record.

The inference from (3a) derives the specific instance m for M and d for D.

This link between the utterance and its context
is part of the interpretation and has to be recognized to recover the speaker's
intention.

This is very much an anaphoric and representationalist treatment of
presupposition.

However, there is not, and cannot be, in structures such as (3) the kind of
surgical process of accommodation that van der Sandt has proposed.

Presuppositions
don't disappear from pragmatic interpretations once they are resolved.

On the contrary, the resolution is itself recorded as part of the inference.

Moreover, in any proof, you can use a rule only by deriving its antecedent from the available assumptions; a proof records the logical consequences of assumptions about the world.

An interpretation in particular must spell out sufficient assumptions about the context to derive the presuppositions required by any causal axioms, such as (3b), that figure in it.

This is in keeping with Beaver's observation that the interpretation of presupposition always involves a "top-level" assumption about the conversation [4].

Indeed, this argument shows that
a representationalist view of presupposition may rule out local accommodation
for principled reasons, just like Stalnaker's information-based view of pragmatic
presupposition does [39].

Pragmatic interpretations even mark a departure from Grice's characterization
of communication!

Grice was particularly concerned with the process of communication, and observed that understanding a meaningful utterance such as (2) depends on recognizing the intention behind it.

So with (2) for example,
a hearer B will recognize i1.

From this, B knows the speaker intends the
conversation to evolve in a certain way.

In accepting and grounding A's contribution [9, 7], B cooperatively sees to it that these effects are taken up.

In the
normal case, A anticipates and indeed intends this uptake. In other words,
the speaker A asserts (2) with the intention that B will recognize i1, adopt
it spontaneously and cooperatively, and make it clear that B is doing so.

Let's
call this intention i2. It represents an expectation about conversational
dynamics, an uncertain conclusion about the unfolding of a collaborative exchange
within a group of language users, which allows a speaker to select a good
utterance in context.

Both the linguistic reasoning encapsulated in pragmatic interpretations, such
as i1, and the psychological reasoning encapsulated in expectations about conversational dynamics, such as i2, are essential to the process of meaningful
communication that is characteristic of language use.

Grice crystallizes this in
his famous analysis of non-natural meaning.

Recasting this analysis in present terms, we might offer (4).

(4)

To mean p is to deliberately attempt to use conversational dynamics
to contribute p to the conversational record,
by manifesting a pragmatic
interpretation which represents an utterance as contributing p to the
conversational record.

This doesn't look much like Grice's definition.

For Grice, a communicative intention spells out its own causal role in directing the hearer's cognitive processes.

Grice assumed that i1 = i2 and understood this intention as self-referential.

These complex speaker meanings must inherently conflate linguistic conventions
with psychological generalizations.

If that's right, research on interpretation
that sidesteps psychology, as undertaken by computational semanticists
for example, can only be regarded as short-circuiting Gricean reasoning, not
interfacing with it.

Grice's suggestion has a geeky
appeal, but it is not inevitable.

Grice's position in fact was formulated with the anticipation that it would help philosophers reduce speakers' grammatical knowledge of linguistic meaning to speakers' general knowledge of human psychology -- a tendentious outgrowth of other aspects of Grice's philosophy of language.

Thomason has argued that the key to the Gricean account is just
that the speaker's overriding intention plays out through a suitable process of
intention-recognition.

Thomason introduces an account of meaning which allows
for a distinction between the overriding intention, such as i2, and a recognized
intention, such as i1.

The dual status of utterances as intentional actions allows us to acknowledge
the richness of the processes that unfold in real conversation and to articulate a
principled role in these processes for pragmatic interpretations (and the semantic
representations they contain).

The characterization of these processes is now central to formal models of dialogue; for example, Ginzburg and Cooper [17, 13] keep track of the contextual links of an utterance as well as its intended contribution, and define dialogue transitions that update the conversation in different ways by drawing flexibly on these representations.

In dialogue, hearers sometimes take up the new presuppositions they find a speaker making.

But other times they reject them, or ask about them, or simply delight with the speaker in their absurdity.

These four scenarios, for example, can all start from
B's successful recognition of the same representation (3) as manifested by A in
uttering (2).

In the first scenario, it really is mutual knowledge that m is the man with the champagne.

B
knows that A is sincere and cooperative and so intends the conversation to
evolve as mapped out in (3).

In accepting A's contribution, and in grounding it, B cooperatively sees to it that the effects envisaged in (3) are taken up, and thus that the conversational record does come to provide that m is happy.

In the second scenario, B does not already know what kind of drink m has.

But from recognizing
(3), B knows that A presumes m has champagne.

Judging A sincere and
cooperative, B takes up the assumption that m has champagne, and goes on to
accept A's contribution.

A and B proceed with a conversational record where
m has champagne and m is happy.

In the third scenario, B knows that m does not have champagne; m has water.

From recognizing (3), however, B knows A presumes otherwise. Judging A sincere and cooperative, B concludes that A is in error, and follows up with a correction.

It's not champagne, it's water.

From this point, the interlocutors still have to reach agreement on what drink m has, and what m's emotional state is.

In the fourth scenario, B looks to m, sees a sour face, and laughs.

That is, since m is obviously not happy, A could not have been sincere in offering this interpretation.

(3) represents a pretense, and may not contribute explicit information to the conversational record.

But perhaps it has its other conversational effects.

It might, for example, cement A and B's relationship, by reinforcing some implicit understanding they have (e.g., the party is too far gone to be saved).
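
The four scenarios can be read as four branches of a single uptake decision, all starting from B's recognition of (3). The following schematic sketch (the move names are illustrative, not Stone's) makes that branching explicit: the choice among accepting, accommodating, correcting and joking depends on how the context the interpretation presumes relates to B's own information.

```python
# Schematic sketch of the four uptake options after B recognizes A's
# interpretation (move names are illustrative, not Stone's).

def respond(presumed, contribution, b_beliefs, b_takes_seriously=True):
    """Choose B's dialogue move after recognizing A's interpretation.

    presumed     -- atoms the interpretation presumes are on the record
    contribution -- atoms the interpretation would add to the record
    b_beliefs    -- atoms B privately takes to be true (("not",) + atom for denials)
    """
    if not b_takes_seriously:
        return ("joke", "treat the interpretation as a pretense; no explicit update")
    conflicts = {p for p in presumed if ("not",) + p in b_beliefs}
    if conflicts:
        return ("correct", conflicts)                 # "It's not champagne, it's water."
    if presumed <= b_beliefs:
        return ("accept", contribution)               # take up the effects mapped out in (3)
    return ("accommodate", (presumed - b_beliefs) | contribution)

presumed = {("champagne", "d")}
contribution = {("happy", "m")}

print(respond(presumed, contribution, {("champagne", "d")}))            # ('accept', ...)
print(respond(presumed, contribution, set()))                           # ('accommodate', ...)
print(respond(presumed, contribution, {("not", "champagne", "d")}))     # ('correct', ...)
print(respond(presumed, contribution, set(), b_takes_seriously=False))  # ('joke', ...)
```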

Pragmatic interpretations embody an appealing theory of language use
with close ties to our existing practice.

But what Stone likes most about them is their ability to inject syntactic and semantic knowledge into pragmatic processes
based on Gricean reasoning.

Stone sketches two examples of this, both
of which focus on reconstructing a speaker's assumptions in using an utterance.

Utterances using vague words can achieve specific effects in context.

Imagine
describing an arrangement of three two-cm squares (call them s1, s2 and s3)
and one four-cm square (call it s4) as in (5).

"The large square is nice."

With this utterance, the speaker can be understood to refer to s4.

How might the intention behind (5) be recognized in its context, even though vague adjectives presuppose a standard of comparison and the context here does not inherently supply one?

What's more, even though (5) is associated with a flawed communicative intention in this context, simply by being recognized, (5) can achieve all the effects we would normally associate with it; and it can, in addition, update the context to include a standard of comparison for large squares, by accommodation.

This explanation shows
how we can capture the pragmatic perspective on vagueness common to many
recent proposals within general computational models of utterance
understanding.

Stone begins by specifying the presupposition of (5), as given in (6).

(6) square(X) ∧ size(X, S) ∧ large-std(⟨D, ∞⟩) ∧ in(S, ⟨D, ∞⟩)

In words, X is a square, the size of X is S, the interval ⟨D, ∞⟩ (lower-bounded by some scalar value D and without an upper bound) provides the standard for large size in the context, and S lies within this interval.

The presupposition arises as part of a communicative intention such as that
represented in (7).

(7)

a. [cr](square(s4) ∧ size(s4, 4cm) ∧ large-std(⟨d, ∞⟩) ∧ 2cm < d < 4cm)
   Assumption about context

b. ([cr](square(X) ∧ size(X, S) ∧ large-std(⟨D, ∞⟩) ∧ in(S, ⟨D, ∞⟩)) ∧ A utters (5)) → [n][cr]nice(X)
   Dynamic semantics

c. A utters (5)
   Hypothesized course of action

d. [n][cr]nice(s4)
   Desired effect, by logic from (7a-7c).

In particular, the presupposition originates in the antecedent of the dynamic
semantic clause (7b).

As before, the intention derives a specific instance of this presupposition by inference from a hypothesis (7a) about the conversational record.

But now in this case, the standard is represented as an arbitrary or underspecified term d that must lie somewhere between the size of squares s1, s2 and s3 and the size of square s4.

The vagueness of the interpretation consists in the underspecification; the speaker is not committed to a specific value for d, and the hearer cannot identify one.

Let's consider the hearer's inference in recognizing the plan in (7).

The
hearer is tracking the speaker's deliberation, and knows:

(8)

a. The speaker is acting as though a certain context obtains. This pretend context is different from the actual context only in certain potentially predictable ways. In particular, the pretend context may supply standards for vague predicates that the actual context does not.

b. The pretend context supplies some intended instance of (6).

c. In this pretend context, this instance can be recognized as intended.

By (8a) and (8b), the hearer can infer that X is one of the four squares in the context, and that S is the size of X. By (8a) and (8b), the hearer can also infer that the pretend context specifies an interval of size that includes the size S of X.

There now remain two qualitatively different standards (less than 2 cm, or between 2 cm and 4 cm). But (8c) eliminates the smaller standard, since it provides no way to recognize which of the four squares is X: the presupposition of the plan can be satisfied in the pretend context with any of the four possible squares.

On the other hand, (8c) confirms the larger standard, since the presupposition now has only the resolution where X is square s4.

This inference combines abductive reasoning to reconstruct the speaker's context [24, 44] with constraint satisfaction to resolve references.
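
As a rough illustration of how (8a-8c) cut down the candidate standards, here is a small sketch assuming the toy scene of three 2 cm squares and one 4 cm square; the constraint-satisfaction step keeps only the standard under which the description in (5) resolves to a unique referent. This is an illustration of the reasoning, not Stone's implementation.

```python
# Sketch of the hearer's inference (8) for the toy scene of three 2 cm
# squares and one 4 cm square.  Only the standard under which "the large
# square" picks out a unique referent survives step (8c).

sizes = {"s1": 2, "s2": 2, "s3": 2, "s4": 4}   # sizes in cm

# Qualitatively different lower bounds d for the standard <d, infinity):
candidate_standards = [1, 3]                    # d < 2 cm, or 2 cm < d < 4 cm

def referents(standard):
    """Squares whose size lies in the interval <standard, infinity)."""
    return [x for x, s in sizes.items() if s > standard]

recognizable = [d for d in candidate_standards if len(referents(d)) == 1]
print(recognizable)                  # [3] -- the smaller standard is eliminated
print(referents(recognizable[0]))    # ['s4'] -- the intended referent of (5)
```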

Once the hearer recognizes the plan, the hearer is free to respond to it in
any reasonable way.

The most cooperative strategy would be to update the
representation of the actual context, to provide the standard of size the speaker
appealed to, in a step of accommodation, and then to respond naturally to the
utterance in the revised context.

For example, if the hearer would have handed
the large square to the speaker at this juncture if both had already agreed
that this was the large square, the hearer could also hand the large square
to the speaker here.

Note, however, that this kind of cooperative strategy
probably diverges from actual practice in face-to-face conversation, where more
conservative and interactive coordination would be expected.

The discussion so far has centered on reasoning processes that a dialogue system
might use in interacting with a user.

But perhaps the most flexible tools we need for reasoning about meaning are those that will make it easier to develop and extend such systems in the first place: tools that help us construct semantic resources, for example.

Such problems may also benefit from the application of Gricean reasoning to linguistic representations.

Consider generation of referring expressions (GRE), a well-studied and important
subtask of NLG (see, e.g., [14]).

The input to GRE is an entity, as
characterized by a database of information about it that is presumed to be
part of the common ground.

The output is the semantics and perhaps also the
syntax of a linguistic description that will allow the hearer to distinguish the
entity uniquely from its salient alternatives.

To build a GRE module requires
identifying the context set of individuals that are explicit or implicit in application discourse, formalizing the relevant properties of all these individuals, and specifying how to compose these properties into linguistic descriptions. The
GRE module thus epitomizes the resource-intensive character of many natural
language processing tasks.

The simplest way to build a GRE module would be to supply it with examples of desired behavior, pairing entities with descriptions of them that would be satisfactory for a system to use in context. Automatic methods would then construct a suitable context set and knowledge base for a satisfactory GRE module for the system. These NLG resources would account for the sample descriptions the designer has supplied, but could also generalize to other possible forms of referring expressions and to other possible contexts.

This sets up a problem of Gricean reasoning.

The problem is to frame GRE tasks for the system in such a way that it can make its choices in a predictable way to match the specified examples. A solution involves reconstructing the choices a system has in GRE and reconstructing the reasons it must have to make those choices one way or another. Consider an example: we want the system to describe d1 as the furry black dog. Then we must at least have the information in (9).

(9)

a. dog(d1)

b. black(d1)

c. furry(d1)

This information is required to support the system's choices.

But that can't be all the conversational record contains.

Otherwise we would expect "it" or "the dog".

The generator must have alternatives to d1 in mind.

For example, maybe there is another dog d2 that is furry but not black, and a third dog d3 that is black but not furry, as in (10).

(10)

a. dog(d2), dog(d3)

b. black(d3)

c. furry(d2)

(10) assumes a convention of negation-by-failure: we specify that d2 is not black and that d3 is not furry simply by omitting the relevant formulas.

Now the
generator has a motivation for every element in its description of d1.

Without
black, the referent might be d1 or d2.

Without furry, the referent might be d1 or d3.

And without dog, the generator wouldn't have an English noun phrase
expressing the right properties.

By supporting and motivating the generator's choices, we can ensure that the generator realistically should and would be able to use the furry black dog to identify d1 in this context.
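
A minimal sketch of this kind of motivated choice, assuming a property-set knowledge base in the style of (9)-(10) and a simple rule-out-the-distractors test (roughly in the spirit of the GRE literature cited here, not Stone's own system):

```python
# Sketch (not Stone's system): each property added to the description must
# help narrow the denotation down to the target entity.

kb = {
    "d1": {"dog", "black", "furry"},
    "d2": {"dog", "furry"},
    "d3": {"dog", "black"},
}

def denotation(properties):
    """Entities compatible with every property in the description."""
    return {e for e, props in kb.items() if properties <= props}

def describe(target, preference=("dog", "black", "furry")):
    """Add the target's properties, in order of preference, until only it remains."""
    description = set()
    for p in preference:
        if p in kb[target] and denotation(description) != {target}:
            description.add(p)
    return description if denotation(description) == {target} else None

print(describe("d1"))   # e.g. {'dog', 'black', 'furry'} -- each word rules out a distractor
print(describe("d2"))   # None -- every positive description of d2 also fits d1
```

Note that in this knowledge base d2 cannot be distinguished by positive properties at all, since d1 shares everything d2 has; this is exactly the kind of defect the diagnostic discussed below is meant to flag.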

To pursue this idea in a general way, we can use a model of linguistic choice
after [43], where a grammatical derivation adds lexical elements one-by-one.
We reconstruct a rationale for such choices. Each of these choices must be
supported.

And each must be motivated in the context of the generator's other commitments, and the goals of reference; it must fulfill a syntactic function that is required in a complete derivation, or else it must rule out some distractor.

Applying Gricean inference to such system-building problems allows us to draw on flexible, general modules. For example, the general approach makes predictions about how the same individuals could be described in new contexts, perhaps using more concise, context-sensitive descriptions. Krahmer and Theune [27] investigate the inherent context-sensitivity of general approaches to GRE; see also [14].

The general approach also offers an inexpensive diagnostic that specified linguistic behavior portrays a world of individuals in a way that is consistent and that interlocutors can be expected to recognize. This check would fail for some sets of examples. For instance, a specification that said d1 could be described as "the black dog" and d2 could be described as "the dog" could not be supported and motivated; in this specification, "the dog" would be ambiguous. Failure in such cases flags a genuine defect in the specification for NLG, for which the appropriate response is to revise the NLG examples [36].
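
A correspondingly small sketch of that diagnostic, under the deliberately naive assumption that each entity's knowledge-base entry contains exactly the properties its specified description mentions:

```python
# Hedged sketch of the consistency check: flag every specified description
# that fails to pick out its entity uniquely in the induced knowledge base.

examples = [("d1", {"dog", "black"}), ("d2", {"dog"})]
kb = dict(examples)   # naive KB induction: entity -> exactly the mentioned properties

def check(examples, kb):
    """Return the specified descriptions that do not uniquely identify their entity."""
    problems = []
    for entity, description in examples:
        denotation = {e for e, props in kb.items() if description <= props}
        if denotation != {entity}:
            problems.append((entity, description, denotation))
    return problems

print(check(examples, kb))
# [('d2', {'dog'}, {'d1', 'd2'})] -- "the dog" is ambiguous; the examples need revision
```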

An essential ingredient of language use is our ability to reason about utterances
as intentional actions. Linguistic representations are the natural substrate for
such reasoning. Intentions are resources for deliberation; they therefore abstract
away from considerations that don't bear on planning and choice.

In
the case of communication, abstracting away from the cooperative processes of
conversation distills a communicative intention that records the grammatical
description of the utterance, its links with the context, and its contributions to
the conversational record.

From this perspective, models from computational semantics can often be seen as providing an infrastructure to carry out such inferences from rich and accurate grammatical descriptions. Exploring such inferences offers a productive pragmatic perspective on problems of interpretation, and promises to leverage semantic representations as part of more flexible and more general tools that compute with meaning.

REFERENCES

J. Allen, G. Ferguson, and A. Stent. An architecture for more realistic conversational systems. In Intelligent User Interfaces, 2001.

N. Asher and A. Lascarides. Logics of Conversation. Cambridge, 2003.

C. Barker. The dynamics of vagueness. Linguistics and Philosophy, 25(1):1-36, 2002.

D. Beaver. Presupposition and Assertion in Dynamic Semantics. CSLI, 2001.

N. Blaylock, J. Allen, and G. Ferguson. Managing communicative intentions with collaborative problem solving. In R. Smith and J. van Kuppevelt, editors, Current and New Directions in Dialogue. Kluwer, 2002.

M. E. Bratman. Intention, Plans, and Practical Reason. Harvard, 1987.

S. E. Brennan. Seeking and Providing Evidence for Mutual Understanding. PhD thesis, Stanford University, 1990.

P. Brown and S. C. Levinson. Politeness: Some Universals in Language Use. Cambridge, 1987.

H. H. Clark and E. F. Schaefer. Contributing to discourse. Cognitive Science, 13:259-294, 1989.

P. R. Cohen and H. J. Levesque. Intention is choice with commitment. Artificial Intelligence, 42:213-261, 1990.

P. R. Cohen and H. J. Levesque. Rational interaction as the basis for communication. In P. R. Cohen, J. Morgan, and M. E. Pollack, editors, Intentions in Communication, pages 221-256. MIT, 1990.

P. R. Cohen and H. J. Levesque. Teamwork. Nous, 24(4):487-512, 1991.

R. Cooper and J. Ginzburg. Using dependent record types in clarification ellipsis. In J. Bos, M. E. Foster, and C. Matheson, editors, EDILOG 2002, pages 45-52, 2002.

R. Dale and E. Reiter. Computational interpretations of the Gricean maxims in the generation of referring expressions. Cognitive Science, 18:233-263, 1995.

K. Donnellan. Reference and definite descriptions. Philosophical Review, 75:281-304, 1966.

G. Ferguson and J. F. Allen. TRIPS: An intelligent integrated problem-solving assistant. In AAAI, pages 567-573, 1998.

J. Ginzburg and R. Cooper. Clarification, ellipsis and the nature of contextual updates in dialogue. King's College London and Göteborg University, 2001.

D. Graff. Shifting sands: An interest-relative theory of vagueness. Philosophical Topics, 28(1):45-81, 2000.

H. P. Grice. 1939. Privation and meaning.
-- 1941. Personal identity. Mind.
-- 1948. Meaning. Philosophical Review, 66(3):377-388, 1957.
-- 1967. Utterer's meaning and intention. Philosophical Review, 78(2):147-177, 1969.
-- 1967. Logic and conversation. In P. Cole and J. Morgan, editors, Syntax and Semantics III: Speech Acts, pages 41-58. Academic Press, 1975.

B. Grosz and S. Kraus. Collaborative plans for complex group action. Artificial Intelligence, 86(2):269-357, 1996.

B. J. Grosz and C. L. Sidner. Plans for discourse. In P. R. Cohen, J. Morgan, and M. E. Pollack, editors, Intentions in Communication, pages 417-444. MIT, 1990.

J. Hobbs, M. Stickel, D. Appelt, and P. Martin. Interpretation as abduction. Artificial Intelligence, 63:69-142, 1993.

H. Kamp and U. Reyle. From Discourse to Logic. Kluwer, 1993.

H. Kamp and A. Rossdeutscher. DRS-construction and lexically driven inference. Theoretical Linguistics, 20:97-164, 1994.

E. Krahmer and M. Theune. Efficient context-sensitive generation of referring expressions. In K. van Deemter and R. Kibble, editors, Information Sharing: Reference and Presupposition in Language Generation and Interpretation, pages 223-265. CSLI, 2002.

A. Kyburg and M. Morreau. Fitting words: Vague words in context. Linguistics and Philosophy, 23(6):577-597, 2000.

H. J. Levesque. Foundations of a functional approach to knowledge representation. Artificial Intelligence, 23(2):155-212, 1984.

D. Lewis. Scorekeeping in a language game. In Semantics from Different Points of View, pages 172-187. Springer, 1979.

S. Neale. Descriptions. MIT, 1990.

G. Nunberg. The non-uniqueness of semantic solutions: Polysemy. Linguistics and Philosophy, 3(2):143-184, 1979.

P. Piwek. Logic, Information and Conversation. PhD thesis, Eindhoven University of Technology.

M. E. Pollack. Plans as complex mental attitudes. In P. R. Cohen, J. Morgan, and M. E. Pollack, editors, Intentions in Communication, pages 77-103. MIT, 1990.

M. E. Pollack. The uses of plans. Artificial Intelligence, 57:43-68, 1992.

E. Reiter and S. Sripada. Should corpora texts be gold standards for NLG? In INLG, pages 97-104, 2002.

J. R. Searle. Indirect speech acts. In P. Cole and J. Morgan, editors, Syntax and Semantics III: Speech Acts, pages 59-82. Academic Press, 1975.

D. Sperber and D. Wilson. Relevance: Communication and Cognition. Harvard, 1986.

R. Stalnaker. Presuppositions. Journal of Philosophical Logic, 2(4):447-457, 1973.

M. Stokhof and J. Groenendijk. Dynamic semantics. In R. Wilson and F. Keil, editors, MIT Encyclopedia of Cognitive Science. MIT, 1999.

M. Stone. Representing communicative intentions in collaborative conversational agents. In B. Bell and E. Santos, editors, AAAI Fall Symposium on Intent Inference for Collaborative Tasks, 2001.

M. Stone. Communicative intentions and conversational processes in human-human and human-computer dialogue. In J. Trueswell and M. Tanenhaus, editors, World-Situated Language Use. MIT, 2003.

M. Stone and C. Doran. Sentence planning as description using tree-adjoining grammar. In ACL, pages 198-205, 1997.

M. Stone and R. H. Thomason. Context in abductive interpretation. In J. Bos, M. E. Foster, and C. Matheson, editors, EDILOG 2002, pages 169-176, 2002.

R. H. Thomason. Accommodation, meaning and implicature. In P. R. Cohen, J. Morgan, and M. E. Pollack, editors, Intentions in Communication, pages 325-363. MIT, 1990.

R. van der Sandt. Presupposition projection as anaphora resolution. Journal of Semantics, 9(2):333-377, 1992.
