The Grice Club

The club for all those whose members have no (other) club.

Wednesday, January 9, 2013

A Discourse By Grice

Speranza
 
Commentary on M. Mazzer,
"The Text as a Context:
Blurring the Boundaries between Sentence and Discourse"

Grice loved a motto.

He would say that one of the philosophical dicta that influenced him most he heard from Cook Wilson:

What we know we know.

This is a sentence -- and a discourse.

"I am; therefore I think" is false

but

"I think; therefore, I am" -- is possible a sentence AND a discourse. (Grice was careful with the use of 'therefore').

In general, Grice thought, oddly, that 'sentence', unlike 'discourse', was a value-oriented notion:

"The if not well Peter wasn't the the"

is NOT a sentence. The idea of a sentence is already value-oriented. Hence his opposition, in "Vacuous Names" (where he presents his System G-HP, a highly powerful version of System G) to otiose terms like "well-formed formula" -- and such.

Conversation, and discourse, Grice used "informally". In his Oxford Lectures on Logic and Conversation, which predated the William James lectures, he is strict that he will deal with what he calls couplets, such as

A: What time is it?
B: Noon.

This is the origin of the idea of a conversational implicature, a refinement of Nowell-Smith's mere "contextual implication". Since then, Grice became obsessed, in a good way, with "CONVERSATION" as the unit for philosophical analysis.

A central and influential idea among researchers of language is that the
sentence, by virtue of its direct relationship with syntactic parsing,
represents the heart of language itself. Even in the field of pragmatics, models rooted in classical theories tend to give prominence to the sentence once again.

Here, we present results from recordings of event-related brain potentials that bring into question even the distinction between sentence and discourse.

During natural communicative
exchanges, the human brain continuously and immediately relates
incoming words to the previous discourse, whether it consists of a word, a sentence or complex speech. Moreover, focusing on discourse
instead of sentence represents a viable strategy to better understand the
relationship between language and other cognitive systems.
Keywords: Discourse; text; context; experimental pragmatics; ERP recordings.

Our theme is the pragmatics of discourse.

The aim is to highlight
the impact of discourse-level factors on language processing in order to
demonstrate that the classic separation between sentence and discourse may be
misleading if we want to investigate the processes that extract meaning from
language. Moreover, moving attention from the sentence as an abstract and formal entity to discourse as a concrete and shaping context is a good way to
release language from isolation and consider it on the basis of its relationship
with other cognitive processes, in an interdisciplinary framework.

Communicative activity generally does not rely on the exchange of isolated pieces of information but on the construction and transmission of meaningful and coherent sequences of sentences. In spite of this evidence, the study of discourse received very little attention in cognitive science for a long time.

Why?

According to most of the interpretative models developed in the field of cognitive science, the proposition represents the essence of language. Since the proposition belongs to the domain of syntactic analysis, assuming that the proposition is the essence of language is equivalent to maintaining that syntax represents the core of linguistic processing.
In pragmatics there is widespread agreement on the idea that syntactic and semantic processing constitute just one side of the coin. The other side is represented by additional ‘contextual’ factors that help to fix the final interpretation of a sentence.

In fact, in order to comprehend the speaker’s
meaning, listeners are required to perform two basic tasks: decoding what is
said (semantic meaning) and understanding what is meant (pragmatic
meaning).

In other terms, “pragmatic theories agree in considering meaning as
comprising a semantic component (the meaning of what is said) and a
pragmatic component (the meaning derived by what is intended by the
speaker).

Both the processes involved in the unification of the two components
and the time-course of these processes are, however, still under debate”
(Balconi, 2010, p. 96).

The debate, in particular, is between supporters of a “two step model” and supporters of a “one step model,” to borrow Hagoort’s terms (Hagoort, 2007). The first group argues that these two processes are accomplished in a serial fashion – with semantic meaning processed first and pragmatic meaning processed later – while the second group predicts an earlier interaction of linguistic and contextual information in order to obtain a complete representation of what is meant by the speaker.


The two step model originates from Oxford philosopher Herbert Paul Grice’s distinction between
 
(a) “what is said” (literal meaning) and
 
(b) “what is implicitly meant” (pragmatic meaning)
(e.g. Grice, 1975, 1989).
 
To grasp the speaker’s communicative intentions, listeners are required to pass first through the comprehension of literal meaning. If their expectations are not met (i.e. if the conversational maxims are not respected) at the explicit level, then inferential processes intervene to adjust the literal meaning on the basis of linguistic and extra-linguistic context.

Cognitive versions of the Gricean model predict that the comprehension process occurs in multiple stages:

(a) the language module computes semantic meaning;

(b) the output of the language module is related to contextual information;

(c1) if the two outputs agree, the process stops, while

(c2) if they do not agree, a mechanism of contextual adjustment is activated. In the latter case the processing time of language comprehension increases (Bambini, 2003, p. 137).
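
To make the staged architecture concrete, here is a minimal toy sketch (in Python) of the serial control flow that stages (a)-(c2) describe. Every function name and the toy "context" representation below are invented for illustration; this is only a rendering of the two-step ordering, not an implementation of any of the models cited.

def decode_semantics(utterance: str) -> str:
    # (a) the language module first computes a context-free, literal meaning;
    # here the lower-cased utterance trivially stands in for that meaning
    return utterance.lower()

def fits_context(literal: str, context: set) -> bool:
    # (b) the literal output is then related to contextual information
    return any(word in context for word in literal.split())

def adjust_to_context(literal: str, context: set) -> str:
    # (c2) mismatch: a contextual-adjustment mechanism reinterprets the literal
    # meaning; this extra step is why processing time is predicted to increase
    return literal + " [reinterpreted against context: " + ", ".join(sorted(context)) + "]"

def comprehend_two_step(utterance: str, context: set) -> str:
    literal = decode_semantics(utterance)
    if fits_context(literal, context):
        return literal                              # (c1) agreement: stop here
    return adjust_to_context(literal, context)      # (c2) costly adjustment

print(comprehend_two_step("He was exceptionally quick", {"quick", "shower", "morning"}))
print(comprehend_two_step("He was exceptionally slow", {"quick", "shower", "morning"}))

The point of the sketch is simply that, on the two-step picture, the contextual check never begins until the literal pass has finished.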

Agreeing with the two step model implies accepting the idea that sentence processing always occurs before discourse processing because, in this view, the contextual constraints conveyed by the text are considered only after the literal meaning of the utterance has been computed.

Cutler and Clifton (1999),
for example, state that, based on syntactic analysis and thematic processing,
utterance interpretation takes place first and integration into a discourse
model follows.

In line with these considerations, Lattner and Friederici (2003) claim that mismatches between the spoken message and the speaker’s intentions are detected relatively late, in slow pragmatic computations that are different from the rapid semantic computations in which word meanings are combined.


According to Hagoort (2007), a model such as this still embraces a “syntactocentric perspective” which views syntax as the central aspect of language (e.g. Chomsky, 1980).

It is possible to sum up this perspective in
two assumptions:

(1) The truly relevant aspects of language are coded in
syntax,

(2) The semantic interpretation of an expression is derived from its
syntactic structure (Hagoort, 2007, p. 801).

The heaviest consequence of this
inheritance is that language analysis continues to focus on the sentence first,
leaving the discourse behind.

The theoretical background of the one step model, instead, lies in the immediacy assumption, formulated by Just and Carpenter in 1980, which states that the linguistic information carried by individual words, together with linguistic and extralinguistic contextual information, concurs from the very beginning in determining the meaning of the incoming words.

At a cognitive level, having immediate access to all the information at one’s disposal means, in concrete terms, bypassing the stage of literal processing. The focus of attention is, in fact, on the effects of the context and the way it interacts with the rest of the linguistic information. In line with this idea, a first extension of the role of pragmatic processes has been made by relevance theorists: pragmatic processes concern the determination of both what is said and what is meant.

According to relevance theory, the main aim of inferential pragmatics is to detect speakers’ communicative intentions, since the processing of the literal meaning of an utterance is not sufficient to determine what speakers intend to communicate (the underdeterminacy thesis).
In the next section we will see how experimental techniques can contribute positively to the debate, showing that the one step model is better suited than the two step model to the evidence provided by the study of the working brain.


A good deal of experimental data in favor of the one step model is offered by Gibbs’ work on reading times (Gibbs, 1989, 2002, 2004).

Gibbs’
reading times data showed that linguistic and contextual information interact
early on to ensure the construction of contextually appropriate meanings and
the inhibition of contextually inappropriate ones. In other words, when given
enough contextual information, as in the ecological setting, listeners are able
to directly access the correct interpretation of what is said, without elaborating
conventional (but not appropriate) sentence meaning.

Whereas reading-time experiments tend to concentrate on the processing of figurative language, electrophysiological studies face the question of discourse processing in a more direct way.

For more than twenty years, however, electrophysiological studies focused only on the processing of sentences rather than on discourse.

According to Van Berkum, the reasons for this radical choice lie in historical, social and practical motives:


One reason is that psycholinguistic ERP research is for historical reasons
strongly rooted in the sentence processing community. This means that most of
the people with EEG expertise and easy access to EEG labs have sentence
processing issues in mind, whereas those most interested in discourse and
conversation are short of expertise and labs.

Furthermore, combining EEG with single sentences is already difficult enough as it is. Because at least 30-40 trials are needed per condition to obtain a relatively clean ERP, factorial sentence-level EEG experiments require the presentation of many lengthy trials, as well as sometimes months of work to create the materials.

Another problem is that
within each of these lengthy trials, people are not supposed to move their eyes,
head or body. With a longer fragment of text or conversation in each trial, all
this is only going to get worse.
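
As an aside on the trial-count point in the quotation above, a back-of-the-envelope simulation helps show why something on the order of 30-40 trials per condition is needed: the stimulus-locked signal is tiny relative to ongoing background EEG, and averaging attenuates that noise only in proportion to the square root of the number of trials. The sketch below (Python with numpy) uses invented amplitudes and an assumed sampling rate purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
fs = 250                                    # assumed sampling rate in Hz
t = np.arange(0, 0.8, 1 / fs)               # one 0-800 ms epoch
true_erp = -5.0 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))   # toy 5 µV "N400"

def average_erp(n_trials):
    # each simulated trial = the small stimulus-locked signal plus ~20 µV background EEG
    trials = true_erp + rng.normal(scale=20.0, size=(n_trials, t.size))
    return trials.mean(axis=0)

for n in (1, 10, 40):
    residual = np.std(average_erp(n) - true_erp)
    print(f"{n:>2} trials: residual noise of about {residual:.1f} µV")

The residual noise falls roughly as 20/sqrt(n), so only around 40 trials does it shrink to a few microvolts, i.e. to the order of magnitude of the effects one is trying to measure.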

In recent years, the fall of most of these ideological and practical obstacles has finally allowed electrophysiology to approach discourse with fruitful results. For instance, the study of the N400 component of the event-related potential (ERP), which at first was very useful to throw light on sentence processing, later found wide application in the field of discourse as well. (The N400 is a negative-going wave that peaks approximately 400 ms after stimulus onset and has a centro-parietal distribution, evident over the back of the head, which is slightly larger over the right hemisphere; Kutas, Van Petten & Besson, 1988.)

Kutas and Hillyard (1980) were the first to observe this negative-going potential, comparing ERP recordings to the last word of sentences that either ended congruously (1) or incongruously (2):

1. I take my coffee with cream and sugar

2. I take my coffee with cream and dog

The authors found negativity in the brainwaves that was much larger for
incongruous sentence completions than for the congruous ones.

Because it
peaked about 400 milliseconds after the onset of the presentation of the word,
this negativity was called the N400. Since its original discovery, much has been
learned about the processing nature of the N400. In particular, as Hagoort and
Brown (1994) observed, the N400 effect does not rely on semantic violation.
For example, subtle differences in semantic expectancy, as between mouth and
pocket in the sentence context

“Jenny put the sweet in her mouth/pocket after
the lesson”,

can also modulate the N400 amplitude (Hagoort & Brown,
1994).

Specifically, as the degree of semantic fit between a word and its context increases, the amplitude of the N400 goes down. Owing to such subtle modulations, the word-elicited N400 is generally viewed as reflecting the process that integrates the meaning of a word into the overall meaning representation constructed by the preceding language input (Hagoort, 2007).
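
To fix ideas about what such amplitude modulations amount to in practice, here is a hedged sketch of how an N400 effect is commonly quantified once per-condition average waveforms exist: take the mean amplitude in a window around 400 ms (300-500 ms is a conventional choice) at centro-parietal sites and subtract conditions. The waveforms and numbers below are invented placeholders, not data from the studies discussed here.

import numpy as np

fs = 250
t = np.arange(0, 0.8, 1 / fs)                # epoch time axis in seconds
window = (t >= 0.3) & (t <= 0.5)             # 300-500 ms analysis window

def mean_amplitude(erp):
    # erp: one condition's average waveform (in µV) at a pooled centro-parietal channel
    return float(erp[window].mean())

# toy condition averages: the poor-fit word is simply more negative around 400 ms
good_fit = -1.0 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
poor_fit = -5.0 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))

n400_effect = mean_amplitude(poor_fit) - mean_amplitude(good_fit)
print(f"N400 effect of about {n400_effect:.2f} µV (more negative = larger effect)")

Nothing in this computation cares whether the two conditions differ at the sentence level or at the discourse level, which is precisely the point of the studies reviewed next.
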
Among the pioneering works that applied the study of the N400 component to discourse processing is that of St George, Mannes and Hoffman (1994), which aimed to investigate whether the N400 is sensitive to global, as well as local, semantic expectancy.

Global coherence refers to the ease with which subjects can relate the current proposition they are reading to theme-related ideas. In this study, the effect of global coherence on event-related brain potentials was tested using four paragraphs, presented one word at a time, in titled and untitled versions.

These paragraphs are not coherent on their own and are made coherent only through the presentation of a title. The EEG was recorded in response to every word in all four paragraphs. An example:

The procedure is actually quite simple. First you arrange things into different groups depending on their makeup. Of course, one pile may be sufficient depending on how much there is to do. If you have to go somewhere else due to lack of facilities that is the next step, otherwise you are pretty well set. It is important not to overdo any particular endeavor.

That is, it is better to do too
few things at once than too many. In the shorter run this may not seem
important, but complications from doing too many can easily arise. A mistake
can be expensive as well. The manipulation of the appropriate mechanisms
should be self-explanatory, and we need not dwell on it here. At first the whole
procedure will seem complicated. Soon, however, it will become just another
facet of life. It is difficult to foresee any end to the necessity of this task in the
immediate future, but then one can never tell (St George, Mannes & Hoffman, 1994, cited in Van Berkum, in press).

Whereas the story appears locally coherent in that its individual sentences
are interconnected and related to a single topic, it is rather difficult to
understand what it is about. When the story is provided with a title, however,
the subject becomes immediately clear (in this case, the title was “Procedure
for washing clothes”).

The ERP recordings, in fact, showed an increase in
N400 amplitude in response to the words in the Untitled paragraphs relative to
the Titled paragraphs, indicating that global coherence does affect the N400.
Building on this initial exploration, Van Berkum and colleagues (1999, 2003, 2008, 2009) performed Kutas and Hillyard’s experiment (1980) on a larger scale (micro-discourses composed of two or more sentences). In particular, they examined the brain’s response to words that were equally acceptable in their local carrier sentence (i.e., 1a and 1b) but differed radically in how well they fit the wider discourse (i.e., 2a and 2b), as in:

1. Jane told her brother that he was exceptionally…

a) Quick

b) Slow

2. By five in the morning, Jane’s brother had already showered and had even gotten dressed. Jane told her brother that he was exceptionally…

a) Quick

b) Slow


Van Berkum and colleagues found that words which elicit N400s of approximately equal amplitude in an isolated sentence (i.e., 1) do not elicit equivalent N400s when they occur in a context that makes one version more plausible than the other (i.e., 2).

Specifically, relative to the discourse-coherent counterpart (i.e. quick), the discourse-anomalous words (i.e. slow) elicited a larger N400 effect. Furthermore, it is worth noting that the discourse-dependent N400 effect emerged for clause-final words as well as for clause-medial words. This means that every incoming word is immediately related to the wider discourse. Furthermore, with spoken words (Van Berkum et al., 2003), the effect of discourse-level fit emerged as early as 150 ms after acoustic word onset (i.e., only some 2-3 phonemes into the word). This suggests that spoken words are actually related to the wider discourse extremely rapidly, well before they have been fully pronounced, and possibly even before they have become acoustically unique.

Finally, the timing, shape
and scalp distribution of the N400 effect elicited by discourse-dependent
anomalies did not differ from that of the ‘classic’ sentence-dependent N400
effect. This indicates that discourse and sentence-dependent semantic
constraints are brought to bear on comprehension as part of the same unified
interpretation process (Van Berkum, in press).

The relevance of identical sentence- and discourse-dependent anomaly
effects would of course be somewhat limited if the commonality simply
reflected some common error detection process, activated by two otherwise
very different comprehension processes. However, it has long been known that
the word-elicited N400 effect is not a simple anomaly detector, but a reliable
index of the ease with which lexical meaning is integrated into the wider
sentential context (Kutas & Van Petten, 1994). In line with this, Otten and
Van Berkum (2005) showed that in a sentence such as:


3. The brave knight saw that the dragon threatened the benevolent sorcerer. He
quickly reached for a:

a) Sword
b) Lance

relative to highly expected words in discourse (e.g., “sword”), words that are merely somewhat less expected (e.g., “lance”) also elicit an N400 effect.


Until now, no evidence has been found in support of the standard model
according to which new words are related to the discourse model only after
they have been evaluated in terms of their contribution to local sentence
semantics. On the contrary, evidence from the N400 consistently indicates
that words are related immediately to the wider discourse and in a way that is no
different from how they are related to local sentence-level context. This
accords well with the models of language comprehension that do not make a
distinction between the computation of sentence- and discourse-level
meaning.

Considerations such as these bring into question the traditional and
well accepted idea that discourse-related information is not instantly available
and must be retrieved from memory when needed (Ericsson & Kintsch, 1995).


The relevant discourse information can sometimes be brought to bear on local
processing within a mere 150 ms after spoken word onset. This indication
appears to be at odds with estimates of how long it would take to retrieve
information about prior discourse from long-term memory, i.e., 300-400 ms
at least (Hagoort, 2007).

Fanciful stories constitute clear evidence of the power of discourse to determine meaning because, when knowledge of the real world is not useful for making sense of the incoming words, the alternative is to call upon the rest of the story to find out what is going on. Indeed, in cases such as these, the immediate integration of lexical-semantic information into a discourse model is particularly clear. Evidence of this has been effectively provided by Nieuwland & Van Berkum (2006).

They had subjects listen to short stories in which an inanimate protagonist was attributed different animacy characteristics (animacy being the classification of nouns, and of the things these words refer to, according to the degree to which they are “alive” or animate). For instance, one of these stories was about a peanut in love:

A woman saw a dancing peanut who had a big smile on his face. The peanut was singing about a girl he had just met. And judging from the song, the peanut was totally crazy about her. The woman thought it was really cute to see the peanut singing and dancing like that. The peanut was salted/in love, and by the sound of it, this was definitely mutual. He was seeing a little almond.


The canonical inanimate predicate (i.e., salted) for this inanimate object (i.e., peanut) elicited a larger N400 than the locally anomalous, but contextually appropriate predicate (i.e., in love). These results show that discourse context can completely overrule constraints provided by animacy, a feature claimed to be part of the evolutionarily hardwired aspects of conceptual knowledge (Caramazza & Shelton, 1998) and often mentioned as a prime example of the semantic primitives involved in the computation of context-free sentence meaning. Therefore we agree with Van Berkum when he says that what primarily seems to matter is how things fit what is being talked about right now, be it in the real world or in a fancy world of happy peanuts (Van Berkum, 2008, p. 377).

 

The observed identity of discourse- and sentence-level N400 effects can be
accounted for in terms of a processing model that abandons the distinction
between sentence and discourse.

One viable way to do this, according to Hagoort (2007), is by invoking the notion of ‘common ground’ (see Clark, 1992 for a discussion of the definition of common ground). Linguistic analyses have demonstrated that the meaning of utterances cannot be determined without taking into account the knowledge that speaker and listener share and mutually believe they share, such as information that derives from community membership, physical co-presence, and linguistic co-presence.

For example, conversational participants would be able to infer
that they share various types of knowledge on the basis of both being in a
particular city, or by looking at a particular object at the same time.
Now we know, from electrophysiological evidence, that in the notion of
common ground we should also include a model of discourse which is
continually updated as the discourse unfolds. With a single sentence, the
relevant common ground only includes whatever discourse and world
knowledge has just been activated by the sentence fragment presented so far.
With a sentence embedded in a discourse context, the relevant common
ground will be somewhat richer, now also including information elicited by the
specific earlier discourse. But the unification process that integrates incoming
words with the relevant common ground should not really care about where the
interpretative constraints came from (Hagoort, 2007, p. 803).
According to an impressive analogy coined by McCarthy (1994), processing discourse is like looking at an impressionist painting. When you stop looking for individual strokes and brushwork, you can grasp the global meaning of what is represented. What are the advantages of taking the landscape of the text as our starting point rather than focusing on its constituent forms? First of all, we are compelled to recognize that such a landscape is not just an assemblage of linguistic strokes but a coherent entity purposefully constructed.

Moreover, “the moment one starts to think of language as discourse the entire landscape changes, usually forever” (McCarthy, 1994, p. 201). Admiring the beauty of the composition, instead of focusing on the single strokes of the brush, is obviously not a strategy to reduce the importance of the components but merely a way of seeing how each of them contributes to the entire project of the painting.


In the same vein, focusing on the deeper rather than on the shallow level of
comprehension is not a way of diminishing the relevance of lexical processing
or syntactic parsing at a surface plane. Blurring the boundaries between
sentence and discourse is not intended to deny the relevance of the sentential
structure for semantic interpretation. On the contrary, sentence-level syntactic
devices (such as word order, case marking, local phrase structure or
agreement) and thematic roles constrain the structure of discourse. However, this is fully compatible with the claim that the contextual information conveyed by discourse is processed in parallel with local sentence meaning.


The scientific study of language has been shaped by the assumption that the
human language faculty evolved for thinking rather than for communicating
(e.g., Chomsky, 1980). This “language-as-product” tradition takes language
itself as the object of study, focusing on grammatical knowledge and the core
processes for recovering linguistic structure from sentences. As Brennan
states:

“This common focus has given generations of psycholinguists and other
cognitive scientists license to concentrate on the study of the linguistic
representation and processing in the mind and brain of a lone (and largely
generic) native speaker, independent of context. As a result, a great deal is
known about how individuals store, organize, and access knowledge in the
mental lexicon; how individuals parse sentences and resolve syntactic
ambiguity; and how individuals plan and articulate utterances. But there is
more to language processing than these (seemingly) autonomous processes”
(Brennan, 2010, p. 302).

What remains to be investigated is what happens in the brain during communicative processes. This implies, first of all, overcoming the Chomskyan distinction between competence and performance, “one of the heaviest burdens for a truly comprehensive approach to language” (Baggio, in press). In my view, studying performance using experimental tools seems to be the best way to illuminate the nature of language processing and “if experimental
research provides evidence which does not align with the introspective
judgments of the linguist or other native speakers, then, following common
practice in science, there is no other choice than to accept the results of the
former and reject the latter” (ibidem).

We have claimed that the brain does not seem to honor the classical division
between sentence and discourse. Indeed, electrophysiological data indicate
that there is no qualitative difference between processing a word in a sentence
or processing it in a discursive frame. In both cases, the brain adopts the
biggest frame at its disposal to interpret the word’s meaning:
To the language user, discourse-level processing is simply language-driven
conceptual processing, regardless of whether it occurs in a single sentence or a
longer discourse. And intuitively, this makes sense. Does it really matter, for
example, whether the targeted entity of a free referential pronoun like “he” has
been introduced in the previous sentence or in the current one? (Van Berkum,
in press, p. 16).

Two-step models, following the tradition launched by the Oxford philosopher Herbert Paul Grice, assume that comprehension processes take place in a two-step fashion.

First, the context-free meaning of a
sentence is computed by combining fixed word meanings in ways specified by
the syntax.

Second, the sentence meaning is integrated with information from
prior discourses, world knowledge, information about the speaker and
semantic information from extra-linguistic domains such as co-speech gestures
or the visual world.

Such ideas are not supported by electrophysiological
evidence and consequently are not adequate in light of our understanding of
the principles of brain function.

One-step models, instead, represent the “neuro-friendly” alternative to two-step models. At the heart of these models is the idea that comprehension processes are based on the parallel use of multiple cues of both a linguistic (phonology, syntax, semantics) and a pragmatic nature (knowledge about the context, the speaker, states of affairs in the world and the rest of the discourse) that operate under unification principles in order to guide the interpretation process.

In every communicative situation, the brain selects from among the information at its disposal that which is most suitable to the context and least expensive from a cognitive point of view. Contextual information has a double function: on the one hand, it is necessary in order to interpret what has been said in an appropriate way; on the other hand, it makes it possible to anticipate what is going to be said. Looking forward positively affects the speed and efficiency of the comprehension processes.

As Van Berkum states, “what we see is an opportunistic proactive brain at work” (Van Berkum, 2008, p. 379), a brain that seeks, from the first moment, to pick up the communicative intentions of the speaker without necessarily passing through a literal phase that is often scarcely informative from a pragmatic point of view.
Establishing the weight to be assigned to discourse is not a question of little importance. It determines, for example, what the place of pragmatics is in relation to other levels of language analysis.

The discussion has two major contenders: the complementary theory and the perspective theory. While the first considers pragmatics an additional linguistic component, the second regards pragmatic competence as a fundamental aspect of a more general communicative competence (Balconi, 2010).

According to the complementary theory, it is possible to represent linguistic components in a hierarchical fashion. Along this imaginary staircase, discourse, as the “biggest chunk” (Van Berkum, in press), sits at the top. Underneath we find all the other units, from sentences to phonemes, passing through words and morphemes.

This kind of approach tends to break the object of research into separate units in order to understand it better. The result is a puzzle of pieces waiting to be connected to each other. While this strategy is fruitful from an analytic point of view, it is not really useful for understanding how communicative processes really work.

On the other side, the perspective theory states that pragmatics is not just one level of analysis among others, but a way to interpret language as a communicative phenomenon immersed in context at all levels. As we have seen, electrophysiological data go exactly in this direction, confirming the perspective view as the best way to describe linguistic processes as they really happen in the brain.


In line with the perspective theory, discourse, understood as the widest linguistic context available, becomes the unit of reference of every linguistic exchange.

Given the binding action that discourse exercises on interpretative processes, it is endowed with cognitive priority, metaphorically representing the dam of the spoken flow that constantly guides production and comprehension processes. Interestingly, the distinguishing mark of discourse is coherence, understood as the thematic and conceptual unity of a text. It is possible to conceive of coherence as the glue thanks to which words and sentences are stuck together and connected to each other. It is no coincidence that the word “text” (from the Latin “textus”) alludes to the fact that the sentences that form the “biggest chunk” are interwoven with each other in a specific, i.e. coherent, way (Simone, 2002, p. 406). In spite of its importance, coherence has always been regarded by linguists as lying beyond the Pillars of Hercules (ivi, p. 449), because it is not just a linguistic phenomenon but is situated in a border zone where language interfaces with other cognitive processes such as memory and executive functions (e.g. Ferretti & Adornetti, 2012).

In general terms, studying language as a context-dependent phenomenon means reducing the distance between language and other cognitive processes:
In its infinite variation, context permeates information processing: regularities
in the way the brain integrates and exploits context might bypass the
distinctions among cognitive modules, while maintaining the distinctiveness of
each faculty.

Indeed, we might be facing a point here where language and other
systems share mechanisms that developed evolutionarily in response to
environmental demands. So, in order to get a full account of processing
pragmatic fact in the brain, one cannot exclude that neuropragmatics should
dialogue with other context-sensitive ‘neuro’disciplines and become even more
interdisciplinary (Bambini, 2010, p. 15).

In the future, the pragmatics of discourse could surely make important gains if it chooses to follow the interdisciplinary route. Now that we are moving away from the “modular era” and approaching a new “network era,” the idea that language shares some mechanisms with other cognitive processes is becoming so evident that it is no longer acceptable to consider language an isolated system. Indeed, more and more studies, using fMRI or PET, have demonstrated the existence of a common network shared by discourse processing and other cognitive processes such as social cognition or spatial and temporal navigation (e.g., Ferstl et al., 2008; Spreng et al., 2008; Ferstl, 2010). “Now that we can look under the hood of the car,” as Van Berkum states (Van Berkum, 2008, p. 379), what remains to be done is to explore the conceptual implications of the experimental data and see what the interaction between language, cognition and perception can tell us about the nature of language itself.

REFERENCES


Baggio, G., Van Lambalgen, M., & Hagoort, P. (in press). Language, linguistics and cognition. In M. Stokhof & J. Groenendijk (Eds.), Philosophy of Linguistics. Amsterdam: Elsevier.
Balconi, M. (2010). From Pragmatics to Neuropragmatics. In M. Balconi (Ed.), Neuropsychology of Communication. Milano: Springer-Verlag, 93-109.
Balconi, M. (2008). Neuropragmatica. Processi, fenomeni e contesti. Roma: Aracne.
Bambini, V. (2010). Neuropragmatics: A foreword. Italian Journal of Linguistics ,
22(1), 1-20.

Bambini, V. (2003). Pragmatica e cervello: guida e stato dell'arte. Quaderni del
Laboratorio di Linguistica della Scuola Normale Superiore , 4, 123-151.
Brennan, S. E., Galati, A., Kuhlen, A. K. (2010). Two Minds, One Dialog:
Coordinating Speaking and Understanding. In B.H. Ross (Ed.), The
Psychology of Learning and Motivation, vol. 53, Burlington: Academic Press,
301-344.

Caramazza, A., & Shelton, J. R. (1998). Domain-specific knowledge systems in the
brain the animate-inanimate distinction. Cognitive Neuroscience, 10, 1-34.
Chomsky, N. (1988). Language and the problems of Knowledge. Cambridge, Mass.:
The MIT Press.
Chomsky, N. (1980). Rules and representation. Behavioural and Brain Sciences , 3
(1), 1-62.
Clark, H. H. (1992). Arenas of language use. Chicago: University of Chicago Press.
Coulson, S. (2006). Constructing Meaning. Metaphor and Symbol, 21(4), 245-266.
Cutler, A., & Clifton, C. E. (1999). Comprehending spoken language: a blueprint of
the listener. In C.M. Brown, & P. Hagoort (Eds.), The Neurocognition of
language, Oxford: Oxford University Press, 123-166.
Ericsson, K.A., & Kintsch, W. (1995). Long-term working memory. Psychological
Review , 102, 211-245.
Ferretti, F. & Adornetti, I. (2012). Dalla comunicazione al linguaggio. Scimmie,
ominidi e umani in una prospettiva darwiniana . Milano: Mondadori.
Ferretti, F. (2010). Alle origini del linguaggio umano. Il punto di vista evoluzionistico.
Roma-Bari: Laterza.
Ferstl, E.C. (2010). Neuroimaging of text comprehension: Where are we now? Italian
Journal of Linguistics , 22 (1), 61-88.
Ferstl, E.C., Neumann, J., Bogler, C., & Von Cramon, D.Y. (2008). The Extended
Language Network: A Meta-Analysis of Neuroimaging Studies on Text
Comprehension. Human Brain Mapping , 29 (5), 581-593.
Fodor, J.A. (1983). The Modularity of Mind. Cambridge, Mass.: MIT Press.
Giani, A. (2005). I testi e la mente. Lecce: Manni.
Gibbs, R.W. (2002). A New Look at Literal Meaning in Understanding What is Said
and What is Implicated. Journal of Pragmatics, 34(4), 457-486.
Gibbs, R.W. (2004). Psycholinguistic Experiments and Linguistics Pragmatics. In I.
Noveck, & D. Sperber (Eds.), Experimental Pragmatics, New York : Palgrave
MacMillan, 50-71.
Gibbs, R.W., & Gerrig, R. (1989). How Context Makes Metaphor Comprehension
Seem ‘Special’. Metaphor and Symbolic Activity, 4, 145-158.

Grice, H. P. (1938). Negation. The Grice Papers, UC/Berkeley, California.
Grice, H. P. (1948). Meaning.
Grice, H. P. (1961). The causal theory of perception.
Grice, H. P. (1975). Logic and conversation. In P. Cole, & J.L. Morgan (Eds.), Syntax
and Semantics, vol. 3, Speech Acts. New York: Academic Press, 199-219.
Grice, H.P. (1989). Studies in the way of words. Cambridge: Harvard University Press.

Hagoort, P. (2008). Should Psychology Ignore the Language of the Brain? Psychological
Science , 2 (17), 96-100.
Hagoort, P., & Brown, C. (1994). Brain responses to lexical ambiguity resolution and
parsing. In C. Clifton, L. Frazier, & K. Rayner (Eds.), Perspectives on sentence
processing. Hillsdale, NJ: Lawrence Erlbaum Associates, 45-81.
Hagoort, P., & Van Berkum, J.J. (2007). Beyond the sentence given. Philosophical
Transactions of the Royal Society , 362, 801-811.
Hanna, J. E., Tanenhaus, M. K., & Trueswell, J. C. (2003). The effects of common
ground and perspective on domains of referential interpretation. Memory and
Language , 49, 43-61.
Horn, L. R. A brief history of negation.
Just, M. A., & Carpenter, P. A. (1980). A theory of reading: from eye fixation to
comprehension. Psychological Review , 87, 329-354.
Kutas, M., & Federmeier, K. D. (2011). Thirty Years and Counting: Finding Meaning
in the N400 Component of the Event-Related Brain Potentials (ERP). Annual
Review of Psychology , 62, 14.1-14.27.
Kutas, M., & Van Petten, C. K. (1994). Psycholinguistic electrified: Event-related
brain potential investigations. In M.A. Gernsbacher (Ed.), Handobook of
Psycholinguistics. New York: Academic Press, 83-143.
Kutas, M., Van Petten, C.K., Besson, M. (1988). Event-related potential asymmetries
during the reading of sentences. Electroencephalography and Clinical
Neurophysiology, 69, 218-233.
Kutas, S., & Hylliard, S. A. (1980). Reading senseless sentences: Brain Potentials
reflect Semantic Incongruity. Science, 207(4427), 203-205.
Lattner, S., & Friederici, A. D. (2003). Talker's voice and gender stereotype in human
auditory sentence processing- evidence from event-related brain potentials.
Neuroscience Letters, 339, 191-194.
McCarthy, M., & Carter, R. (1994). Language as discourse. New York: Longman.
Nieuwland, M. S., & Van Berkum, J. J. (2006). When peanuts fall in love: N400
evidences for the power of discourse. Cognitive Neuroscience 18(7), 1098-
1111.
Noveck, I. A., & Sperber, D. (2004). Experimental Pragmatics. New York: Palgrave
MacMillan.
Otten, M., & Van Berkum, J. J. (2005). The influence of message-based predictability
and lexical association on the N400 effect. Annual meeting of the Cognitive
Neuroscience Society (CNS-2005), April 9-12. New York.
Simone, R. (2002). Fondamenti di linguistica. Roma-Bari: Laterza.
Speranza, Join the Grice Club.
Sperber, D., & Wilson, D. (1986). Relevance: Communication and Cognition.
Oxford: Oxford University Press.
Spreng, R. N., Mar, R. A., & Kim, A. S. (2008). The Common Neural Basis of
Autobiographical Memory, Prospection, Navigation, Theory of Mind and the
Default Mode: A Quantitative Meta-Analysis. Journal of Cognitive
Neuroscience , 21(3), 489-510.
St George, M., Mannes, S., & Hoffman, J. (1994). Global semantic expectancy and language comprehension. Journal of Cognitive Neuroscience, 6, 70-83.
Van Berkum, J. J. (in press). The Electrophysiology of Discourse and Conversation. In
M. Spivey, M. Joanisse, & K. McRae (Eds.), The Cambridge Handbook of
Psycholinguistics. Cambridge: Cambridge University Press.
Van Berkum, J. J. (2009). The Neuropragmatics of 'Simple' Utterance Comprehension: an ERP Review. In U. Sauerland & K. Yatsushiro (Eds.), Semantics and Pragmatics: From Experiment to Theory. Basingstoke: Palgrave MacMillan, 276-316.
Van Berkum, J. J. (2008). Understanding Sentences in Context: What Brain Waves Can Tell Us. Psychological Science, 17(6), 376-380.
Van Berkum, J. J., Hagoort, P., & Brown, C. M. (1999). Semantic integration in sentences and discourse: Evidence from the N400. Journal of Cognitive Neuroscience, 11(6), 657-671.
Van Berkum, J. J., Zwitserlood, P., Brown, C. M., & Hagoort, P. (2003). When and how do listeners relate a sentence to the wider discourse? Evidence from the N400 effect. Cognitive Brain Research, 17, 701-718.
