causation, the relation between
cause and effect, or the act of bringing about an effect, which may be an
event, a state, or an object (say, a statue). The concept of causation has long
been recognized as one of fundamental philosophical importance. Hume called it
“the cement of the universe”: causation is the relation that connects events
and objects of this world in significant relationships. The concept of
causation seems pervasively present in human discourse. It is expressed by not
only ‘cause’ and its cognates but by many other terms, such as ‘produce’,
‘bring about’, ‘issue’, ‘generate’, ‘result’, ‘effect’, ‘determine’, and
countless others. Moreover, many common transitive verbs (“causatives”), such as
‘kill’, ‘break’, and ‘move’, tacitly contain causal relations (e.g., killing involves
causing to die). The concept of action, or doing, involves the idea that the
agent intentionally causes a change in some object or other; similarly, the
concept of perception involves the idea that the object perceived causes in the
perceiver an appropriate perceptual experience. The physical concept of force,
too, appears to involve causation as an essential ingredient: force is the
causal agent of changes in motion. Further, causation is intimately related to
explanation: to ask for an explanation of an event is, often, to ask for its
cause. It is sometimes thought that our ability to make predictions, and
inductive inference in general, depends on our knowledge of causal connections
or the assumption that such connections are present: the knowledge that water
quenches thirst warrants the predictive inference from ‘X is swallowing water’
to ‘X’s thirst will be quenched’. More generally, the identification and
systematic description of causal relations that hold in the natural world have
been claimed to be the preeminent aim of science. Finally, causal concepts play
a crucial role in moral and legal reasoning, e.g., in the assessment of
responsibilities and liabilities. Event causation is the causation of one event
by another. A sequence of causally connected events is called a causal chain.
Agent causation refers to the act of an agent (person, object) in bringing about
a change; thus, my opening the window (i.e., my causing the window to open) is an
instance of agent causation. There is a controversy as to whether agent
causation is reducible to event causation. My opening the window seems
reducible to event causation since in reality a certain motion of my arms, an
event, causes the window to open. Some philosophers, however, have claimed that
not all cases of agent causation are so reducible. Substantival causation is
the creation of a genuinely new substance, or object, rather than causing
changes in preexisting substances, or merely rearranging them. The possibility
of substantival causation, at least in the natural world, has been disputed by
some philosophers. Event causation, however, has been the primary focus of
philosophical discussion in the modern and contemporary period. The analysis of
event causation has been controversial. The following four approaches have been
prominent: the regularity analysis, the counterfactual analysis, the
manipulation analysis, and the probabilistic analysis. The heart of the
regularity or nomological analysis, associated with Hume and J. S. Mill, is the
idea that causally connected events must instantiate a general regularity
between like kinds of events. More precisely: if c is a cause of e, there must
be types or kinds of events, F and G, such that c is of kind F, e is of kind G,
and events of kind F are regularly followed by events of kind G. Some take the
regularity involved to be merely de facto “constant conjunction” of the two
event types involved; a more popular view is that the regularity must hold as a
matter of “nomological necessity” (i.e., it must be a “law”). An even stronger
view is that the regularity must represent
a causal law. A law that does this job of subsuming causally connected events
is called a “covering” or “subsumptive” law, and versions of the regularity
analysis that call for such laws are often referred to as the “covering-law” or
“nomic-subsumptive” model of causality. The regularity analysis appears to give
a satisfactory account of some aspects of our causal concepts: for example,
causal claims are often tested by re-creating the event or situation claimed to
be a cause and then observing whether a similar effect occurs. In other
respects, however, the regularity account does not seem to fare so well: e.g.,
it has difficulty explaining the apparent fact that we can have knowledge of
causal relations without knowledge of general laws. It seems possible to know,
for instance, that someone’s contraction of the flu was caused by her exposure
to a patient with the disease, although we know of no regularity between such
exposures and contraction of the disease (it may well be that only a very small
fraction of persons who have been exposed to flu patients contract the disease).
Do I need to know general regularities about itchings and scratchings to know
that the itchy sensation on my left elbow caused me to scratch it? Further, not
all regularities seem to represent causal connections (e.g., Reid’s example of
the succession of day and night; two successive symptoms of a disease).
Distinguishing causal from non-causal regularities is one of the main problems
confronting the regularity theorist. According to the counterfactual analysis,
what makes an event a cause of another is the fact that if the cause event had
not occurred the effect event would not have. This accords with the idea that
a cause is a condition that is sine qua non for the occurrence of the effect. The
view that a cause is a necessary condition for the effect is based on a similar
idea. The precise form of the counterfactual account depends on how
counterfactuals are understood (e.g., if counterfactuals are explained in terms
of laws, the counterfactual analysis may turn into a form of the regularity
analysis). The counterfactual approach, too, seems to encounter various
difficulties. It is true that on the basis of the fact that if Larry had
watered my plants, as he had promised, my plants would not have died, I could
claim that Larry’s not watering my plants caused them to die. But it is also
true that if George Bush had watered my plants, they would not have died; but
does that license the claim that Bush’s not watering my plants caused them to
die? Also, there appear to be many cases of dependencies expressed by
counterfactuals that, however, are not cases of causal dependence: e.g., if
Socrates had not died, Xanthippe would not have become a widow; if I had not
raised my hand, I would not have signaled. The question, then, is whether these
non-causal counterfactuals can be distinguished from causal counterfactuals
without the use of causal concepts. There are also questions about how we could
verify counterfactuals, in particular
whether our knowledge of causal counterfactuals is ultimately dependent on
knowledge of causal laws and regularities. Some have attempted to explain
causation in terms of action, and this is the manipulation analysis: the cause
is an event or state that we can produce at will, or otherwise manipulate, to
produce a certain other event as an effect. Thus, an event is a cause of
another provided that by bringing about the first event we can bring about the
second. This account exploits the close connection noted earlier between the
concepts of action and cause, and highlights the important role that knowledge
of causal connections plays in our control of natural events. However, as an
analysis of the concept of cause, it may well have things backward: the concept
of action seems to be a richer and more complex concept that presupposes the
concept of cause, and an analysis of cause in terms of action could be accused
of circularity. The reason we think that someone’s exposure to a flu patient
was the cause of her catching the disease, notwithstanding the absence of an
appropriate regularity (even one of high probability), may be this: exposure to
flu patients increases the probability of contracting the disease. Thus, an event,
X, may be said to be a probabilistic cause of an event, Y, provided that the
probability of the occurrence of Y, given that X has occurred, is greater than
the antecedent probability of Y. To meet certain obvious difficulties, this
rough definition must be further elaborated (e.g., to eliminate the possibility
that X and Y are collateral effects of a common cause). There is also the
question whether probabilistic causation is to be taken as an analysis of the
general concept of causation, or as a special kind of causal relation, or
perhaps only as evidence indicating the presence of a causal relationship.
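In symbols, writing P(·) for probability, the rough definition above requires that X raise the probability of Y; the second clause is a hedged sketch of the kind of elaboration just mentioned (a screening-off condition in the spirit of Reichenbach; the variable Z is introduced here for illustration and is not in the original text):

\[
P(Y \mid X) > P(Y),
\]

\[
\text{with no common cause } Z \text{ of } X \text{ and } Y \text{ such that } P(Y \mid X \wedge Z) = P(Y \mid Z).
\]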
Probabilistic causation has of late been receiving increasing attention from
philosophers. When an effect is brought about by two independent causes either
of which alone would have sufficed, one speaks of causal overdetermination.
Thus, a house fire might have been caused by both a short circuit and a
simultaneous lightning strike; either event alone would have caused the fire,
and the fire, therefore, was causally overdetermined. Whether there are actual
instances of overdetermination has been questioned; one could argue that the
fire that would have been caused by the short circuit alone would not have been
the same fire, and similarly for the fire that would have been caused by the
lightning alone. The steady buildup of pressure in a boiler would have caused
it to explode but for the fact that a bomb was detonated seconds before,
leading to a similar effect. In such a case, one speaks of a preemptive, or superseding,
cause. We are apt to speak of causes in regard to changes; however,
“unchanges,” e.g., this table’s standing here through some period of time, can
also have causes: the table continues to stand here because it is supported by
a rigid floor. The presence of the floor, therefore, can be called a sustaining
cause of the table’s continuing to stand. A cause is usually thought to precede
its effect in time; however, some have argued that we must allow for the
possibility of a cause that is temporally posterior to its effect (backward
causation, sometimes called retrocausation). And there is no universal agreement
as to whether a cause can be simultaneous with its effect (concurrent
causation). Nor is there a general agreement as to whether cause and effect
must, as a matter of conceptual necessity, be “contiguous” in time and space,
either directly or through a causal chain of contiguous events (contiguous
causation). The attempt to
“analyze” causation seems to have reached an impasse; the proposals on hand
seem so widely divergent that one wonders whether they are all analyses of one
and the same concept. But each of them seems to address some important aspect
of the variegated notion that we express by the term ‘cause’, and it may be
doubted whether there is a unitary concept of causation that can be captured in
an enlightening philosophical analysis. On the other hand, the centrality of
the concept, both to ordinary practical discourse and to the scientific
description of the world, is difficult to deny. This has encouraged some
philosophers to view causation as a primitive, one that cannot be further
analyzed. There are others who advocate the extreme view (causal nihilism) that
causal concepts play no role whatever in the advanced sciences, such as fundamental
physical theories of space-time and matter, and that the very notion of cause
is an anthropocentric projection deriving from our confused ideas of action and
power.
causa sui (Latin, ‘cause
of itself’), an expression applied to God to mean in part that God owes his
existence to nothing other than himself. It does not mean that God somehow
brought himself into existence. The idea is that the very nature of God
logically requires that he exists. What accounts for the existence of a being
that is causa sui is its own nature.
Cavell, Stanley Louis
(b. 1926), American philosopher whose work has explored skepticism and its
consequences. He was Walter M. Cabot Professor of Aesthetics and General Value
Theory at Harvard from 1963 until 1997. Central to Cavell’s thought is the view
that skepticism is not a theoretical position to be refuted by philosophical
theory or dismissed as a mere misuse of ordinary language; it is a reflection
of the fundamental limits of human knowledge of the self, of others, and of the
external world, limits that must be accepted (in his term, “acknowledged”),
because the refusal to do so results in illusion and risks tragedy.
Cavell’s work defends J. L. Austin from both positivism and deconstructionism
(Must We Mean What We Say?, 1969, and The Pitch of Philosophy, 1994), but not
because Cavell is an “ordinary language” philosopher. Rather, his defense of
Austin has combined with his response to skepticism to make him a philosopher
of the ordinary: he explores the conditions of the possibility and limits of
ordinary language, ordinary knowledge, ordinary action, and ordinary human
relationships. He uses both the resources of ordinary language and the
discourse of philosophers, such as Wittgenstein, Heidegger, Thoreau, and
Emerson, and of the arts. Cavell has explored the ineliminability of skepticism
in Must We Mean What We Say?, notably in its essay on King Lear, and has
developed his analysis in his 1979 magnum opus, The Claim of Reason. He has
examined the benefits of acknowledging the limits of human self-understanding,
and the costs of refusing to do so, in a broad range of contexts, from film (The
World Viewed, 1971; Pursuits of Happiness, 1981; and Contesting Tears, 1996) to
American philosophy (The Senses of Walden, 1972; and the chapters on Emerson in
This New Yet Unapproachable America, 1989, and Conditions Handsome and
Unhandsome, 1990). A central argument in The Claim of Reason develops Cavell’s
approach by looking at Wittgenstein’s notion of criteria. Criteria are not
rules for the use of our words that can guarantee the correctness of the claims
we make by them; rather, criteria bring out what we claim by using the words we
do. More generally, in making claims to knowledge, undertaking actions, and
forming interpersonal relationships, we always risk failure, but it is also
precisely in that room for risk that we find the possibility of freedom. This
argument is indebted not only to Wittgenstein but also to Kant, especially in
the Critique of Judgment. Cavell has used his view as a key to understanding
classics of the theater and film. Regarding such tragic figures as Lear, he
argues that their tragedies result from their refusal to accept the limits of
human knowledge and human love, and their insistence on an illusory absolute
and pure love. The World Viewed argues for a realistic approach to film,
meaning that we should acknowledge that our cognitive and emotional responses
to films are responses to the realities of the human condition portrayed in
them. This “ontology of film” prepared the way for Cavell’s treatment of the
genre of comedies of remarriage in Pursuits of Happiness. It also grounds his
treatment of melodrama in Contesting Tears, which argues that human beings must
remain tragically unknown to each other if the limits to our knowledge of each
other are not acknowledged. In The Claim of Reason and later works Cavell has
also contributed to moral philosophy by his defense, against Rawls’s critique of
“moral perfectionism,” of “Emersonian
perfectionism”: the view that no general principles of conduct, no matter how
well established, can ever be employed in practice without the ongoing but
never completed perfection of knowledge of oneself and of the others on and
with whom one acts. Cavell’s Emersonian perfectionism is thus another
application of his Wittgensteinian and Kantian recognition that rules must
always be supplemented by the capacity for judgment.
Cavendish, Margaret,
Duchess of Newcastle (1623–1673), English author of some dozen works in a variety
of forms. Her central philosophical interest was in the developments in natural
science of her day. Her earliest works endorsed a kind of atomism, but her
settled view, in Philosophical Letters (1664), Observations upon Experimental
Philosophy (1666), and Grounds of Natural Philosophy (1668), was a kind of organic
materialism. Cavendish argues for a hierarchy of increasingly fine matter,
capable of self-motion. Philosophical Letters, among other matters, raises
problems for the notion of inert matter found in Descartes, and Observations upon
Experimental Philosophy criticizes microscopists such as Hooke for committing a
double error, first of preferring the distortions introduced by instruments to
unaided vision and second of preferring sense to reason.
Celsus (late second century
A.D.?), anti-Christian writer known only as the author of a work called The True
Doctrine (Alethes Logos), which is quoted extensively by Origen of Alexandria in
his response, Against Celsus (written in the late 240s). The True Doctrine is
mainly important because it is the first anti-Christian polemic of which we
have significant knowledge. Origen considers Celsus to be an Epicurean, but he
is uncertain about this. There are no traces of Epicureanism in Origen’s
quotations from Celsus, which indicate instead that he is an eclectic Middle
Platonist of no great originality, a polytheist whose conception of the
“unnameable” first deity transcending being and knowable only by “synthesis,
analysis, or analogy” is based on Plato’s description of the Good in Republic
VI. In accordance with the Timaeus, Celsus believes that God created “immortal
things” and turned the creation of “mortal things” over to them. According to
him, the universe has a providential organization in which humans hold no
special place, and its history is one of eternally repeating sequences of
events separated by catastrophes.
certainty (cf. H. P.
Grice, “Intention and uncertainty”), the property of being certain, which is
either a psychological property of persons or an epistemic feature of
proposition-like objects (e.g., beliefs, utterances, statements). We can say that
a person, S, is psychologically certain that p (where ‘p’ stands for a
proposition) provided S has no doubt whatsoever that p is true. Thus, a person
can be certain regardless of the degree of epistemic warrant for a proposition.
In general, philosophers have not found this an interesting property to
explore. The exception is Peter Unger, who argued for skepticism, claiming that
(1) psychological certainty is required for knowledge and (2) no person is ever
certain of anything (or hardly anything). As applied to propositions, ‘certain’
has no univocal use. For example, some authors (e.g., Chisholm) may hold that a
proposition is epistemically certain provided no proposition is more warranted
than it. Given that account, it is possible that a proposition is certain, yet
there are legitimate reasons for doubting it just as long as there are equally
good grounds for doubting every equally warranted proposition. Other
philosophers have adopted a Cartesian account of certainty in which a
proposition is epistemically certain provided it is warranted and there are no
legitimate grounds whatsoever for doubting it. Both Chisholm’s and the
Cartesian characterizations of epistemic certainty can be employed to provide a
basis for skepticism. If knowledge entails certainty, then it can be argued
that very little, if anything, is known. For, the argument continues, only
tautologies or propositions like ‘I exist’ or ‘I have beliefs’ are such that
either nothing is more warranted or there are absolutely no grounds for doubt.
Thus, hardly anything is known. Most philosophers have responded either by
denying that ‘certainty’ is an absolute term, i.e., admitting of no degrees, or
by denying that knowledge requires certainty (Dewey, Chisholm, Wittgenstein, and
Lehrer). Others have agreed that knowledge does entail absolute certainty, but
have argued that absolute certainty is possible (e.g., Moore). Sometimes
‘certain’ is modified by other expressions, as in ‘morally certain’ or ‘metaphysically
certain’ or ‘logically certain’. Once again, there is no universally accepted
account of these terms. Typically, however, they are used to indicate degrees
of warrant for a proposition, and often that degree of warrant is taken to be a
function of the type of proposition under consideration. For example, the
proposition that smoking causes cancer is morally certain provided its warrant
is sufficient to justify acting as though it were true. The evidence for such a
proposition may, of necessity, depend upon recognizing particular features of
the world. On the other hand, in order for a proposition, say that every event
has a cause, to be metaphysically certain, the evidence for it must not depend
upon recognizing particular features of the world but rather upon recognizing
what must be true in order for our world to be the kind of world it is (i.e., one having causal connections). Finally,
a proposition, say that every effect has a cause, may be logically certain if
it is derivable from “truths of logic” that do not depend in any way upon
recognizing anything about our world. Since other taxonomies for these terms
are employed by philosophers, it is crucial to examine the use of the terms in
their contexts.
Chang Hsüeh-ch’eng (1738–1801), Chinese philosopher
who devised a dialectical theory of civilization in which beliefs, practices,
institutions, and arts developed in response to natural necessities. This
process reached its zenith several centuries before Confucius, who is unique in
being the sage destined to record this moment. Chang’s teaching, “the Six
Classics are all history,” means the classics are not theoretical statements
about the tao (Way) but traces of it in operation. In the ideal age, a unity of
chih (government) and chiao (teaching) prevailed; there were no private disciplines
or schools of learning and all writing was anonymous, being tied to some
official function. Later history has meandered around this ideal, dominated by
successive ages of philosophy, philology, and literature.
Chang Tsai (1020–1077), Chinese
philosopher, a major Neo-Confucian figure whose Hsi-ming (“Western Inscription”)
provided much of the metaphysical basis for Neo-Confucian ethics. It argues
that the cosmos arose from a single source, the t’ai chi (Supreme Ultimate), as
undifferentiated ch’i (ether) took shape out of an inchoate, primordial state,
t’ai-hsü (the supremely tenuous). Thus the universe is fundamentally one. The
sage “realizes his oneness with the universe” but, appreciating his particular
place and role in the greater scheme, expresses his love for it in a graded
fashion. Impure endowments of ch’i prevent most people from seeing the true
nature of the world. They act “selfishly” but through ritual practice and
learning can overcome this and achieve sagehood.
character, the comprehensive
set of ethical and intellectual dispositions of a person. Intellectual
virtues (like carefulness in the evaluation of evidence) promote, for one,
the practice of seeking truth. Moral or ethical virtues (including traits like
courage and generosity) dispose persons not only to choices and
actions but also to attitudes and emotions.
considered relatively stable and responsive to reasons. Appraisal of character
transcends direct evaluation of particular actions in favor of examination of
some set of virtues or the admirable human life as a whole. On some views this
admirable life grounds the goodness of particular actions. This suggests
seeking guidance from role models, and their practices, rather than relying
exclusively on rules. Role models will, at times, simply perceive the salient
features of a situation and act accordingly. Being guided by role models
requires some recognition of just who should be a role model. One may act out
of character, since dispositions do not automatically produce particular
actions in specific cases. One may also have a conflicted character if the
virtues one’s character comprises contain internal tensions between, say,
tendencies to impartiality and to friendship. The importance of formative
education to the building of character introduces some good fortune into the
acquisition of character. One can have a good character with a disagreeable
personality or have a fine personality with a bad character because personality
is not typically a normative notion, whereas character is.
Charron, Pierre (1541–1603), French theologian
who became the principal expositor of Montaigne’s ideas, presenting them in
didactic form. His first work, The Three Truths (1595), presented a negative
argument for Catholicism by offering a skeptical challenge to atheism,
non-Christian religions, and Calvinism. He argued that we cannot know or
understand God because of His infinitude and the weakness of our faculties. We
can have no good reasons for rejecting Christianity or Catholicism. Therefore,
we should accept it on faith alone. His second work, On Wisdom (1603), is a
systematic presentation of Pyrrhonian skepticism coupled with a fideistic
defense of Catholicism. The skepticism of Montaigne and the Grecian skeptics is
used to show that we cannot know anything unless God reveals it to us. This is
followed by offering an ethics to live by, an undogmatic version of Stoicism.
This is the first modern presentation of a morality apart from any religious
considerations. Charron’s On Wisdom was extremely popular in France and
England. It was read and used by many philosophers and theologians during the
seventeenth century. Some claimed that his skepticism opened his defense of
Catholicism to question, and suggested that he was insincere in his fideism. He
was defended by important figures in the French Catholic church.
cheapest-cost avoider, in
the economic analysis of law, the party in a dispute that could have prevented
the dispute, or minimized the losses arising from it, with the lowest loss to itself.
The term encompasses several types of behavior. As the lowest-cost accident
avoider, it is the party that could have prevented the accident at the lowest
cost. As the lowest-cost insurer, it is the party that could have insured
against the losses arising from the dispute. This could be the party that could
have purchased insurance at the lowest cost or self-insured, or the party best
able to appraise the expected losses and the probability of the occurrence. As
the lowest-cost briber, it is the party least subject to transaction costs.
This party is the one best able to correct any legal errors in the assignment
of the entitlement by purchasing the entitlement from the other party. As the
lowest-cost information gatherer, it is the party best able to make an informed
judgment as to the likely benefits and costs of an action.
Ch’en Hsien-chang
(1428–1500), Chinese poet-philosopher. In the early Ming dynasty Chu Hsi’s
li-hsüeh (learning of principles) had been firmly established as the orthodoxy
and became somewhat fossilized. Ch’en opposed this trend and emphasized
“self-attained learning” by digging deep into the self to find meaning in life.
He did not care for book learning and conceptualization, and chose to express
his ideas and feelings through poems. Primarily a Confucian, he also drew from
Buddhism and Taoism. He was credited with being the first to realize the depth
and subtlety of hsin-hsüeh (learning of the mind), later developed into a
comprehensive philosophy by Wang Yang-ming.
ch’eng, Chinese term
meaning ‘sincerity’. It means much more than just a psychological attitude.
Mencius barely touched upon the subject; it was in the Confucian Doctrine of
the Mean that the idea was greatly elaborated. The ultimate metaphysical
principle is characterized by ch’eng, as it is true, real, totally beyond
illusion and delusion. According to the classic, sincerity is the Way of
Heaven; to think how to be sincere is the Way of man; and only those who can be
absolutely sincere can fully develop their nature, after which they can assist
in the transforming and nourishing process of Heaven and Earth.
MENCIUS. S.-h.L.
Ch’eng Hao (1032–85), Ch’eng Yi
(1033–1107), Chinese philosophers, brothers who established mature Neo-Confucianism.
They elevated the notion of li (pattern) to preeminence and systematically linked
their metaphysics to central ethical notions, e.g., hsing (nature) and hsin
(heart/mind). Ch’eng Hao was more mystical and a stronger intuitionist. He
emphasized a universal, creative spirit of life, jen (benevolence), which
permeates all things, just as ch’i (ether/vital force) permeates one’s body, and
likened an “unfeeling” (i.e., unbenevolent) person to an “unfeeling” (i.e.,
paralyzed) person. Both fail to realize a unifying “oneness.” Ch’eng Yi
presented a more detailed and developed philosophical system in which the li
(pattern) in the mind was awakened by perceiving the li in the world,
particularly as revealed in the classics, and by t’ui (extending/inferring) their
interconnections. If one studies with ching (reverential attentiveness), one can
gain both cognitively accurate and affectively appropriate “real
knowledge,” which Ch’eng Yi illustrates with an allegory about those who “know”
(i.e., have heard) that tigers are dangerous and those who “know” because they
have been mauled. The two brothers differ most in their views on
self-cultivation. For Ch’eng Hao, it is more an inner affair: setting oneself
right by bringing into full play one’s moral intuition. For Ch’eng Yi,
self-cultivation was more external: chih chih (extending knowledge) through ko wu
(investigating things). Here lie the beginnings of the major schools of
Neo-Confucianism: the Lu-Wang and Ch’eng-Chu schools.
LI1, NEO-CONFUCIANISM.
P.J.I.
cheng ming, also called Rectification of Names, a Confucian program of
language reform advocating a return to traditional language. There is a brief
reference to cheng ming in Analects 13:3, but Hsün Tzu presents the most
detailed discussion of it. While admitting that new words (ming) will sometimes
have to be created, Hsün Tzu fears the proliferation of words, dialects, and
idiolects will endanger effective communication. He is also concerned that new
ways of speaking may lend themselves to sophistry or fail to serve such purposes
as accurately distinguishing the noble from the base.
CONFUCIANISM. B.W.V.N.
Cheng-shih hsüan-hsüeh. See NEO-TAOISM.
ch’i, Chinese term for ether, air, corporeal
vital energy, and the “atmosphere” of a season, person, event, or work. Ch’i
can be dense/impure or limpid/pure, warm/rising/active or cool/settling/still.
The brave brim with ch’i; a coward lacks it. Ch’i rises with excitement or
health and sinks with depression or illness. Ch’i became a concept coordinate
with li (pattern), being the medium in which li is embedded and through which it
can be experienced. Ch’i serves a role akin to ‘matter’ in Western thought, but
being “lively” and “flowing,” it generated a distinct and different set of
questions. P.J.I.
Chiao Hung (1540?–1620), Chinese historian and philosopher
affiliated with the T’ai-chou school, often referred to as the left wing of
Wang Yang-ming’s hsin-hsüeh (learning of the mind). However, he did not repudiate
book learning; he was very erudite, and became a forerunner of evidential
research. He believed in the unity of the teachings of Confucianism, Buddhism,
and Taoism. In opposition to Chu Hsi’s orthodoxy he made use of insights of
Ch’an (Zen) Buddhism to give new interpretations to the classics. Learning for
him is primarily and ultimately a process of realization in consciousness of
one’s innate moral nature.
BUDDHISM, CHU HSI,
NEO-CONFUCIANISM, WANG YANG-MING. S.-h.L. & A.K.L.C.
Chia Yi (200–168 B.C.),
Chinese scholar who attempted to synthesize Legalist, Confucian, and Taoist
ideas. The Ch’in dynasty (221–206 B.C.) used the Legalist practice to unify China,
but unlimited use of cruel punishment also caused its quick downfall; hence the
Confucian system of li (propriety) had to be established, and the emperor had to
delegate his power to able ministers to take care of the welfare of the people.
The ultimate Way for Chia Yi is hsü (emptiness), a Taoist idea, but he
interpreted it in such a way that it is totally compatible with the practice of
li and the development of culture.
CONFUCIANISM, TAOISM. S.-h.L.
ch’ien, k’un, in traditional Chinese cosmology, the names of the two most
important trigrams in the system of the I-Ching (the Book of Changes). Ch’ien ☰ is
composed of three undivided lines, the symbol of yang, and k’un ☷ of three
divided lines, the symbol of yin. Ch’ien means Heaven, the father, creativity;
k’un means Earth, the mother, endurance. The two are complementary; they work
together to form the whole cosmic order. In the system of I-Ching, there are
eight trigrams, the doubling up of two trigrams forms a hexagram, and there are
a total of sixty-four hexagrams. The first two hexagrams are also named ch’ien ䷀
and k’un ䷁.
T’AI CHI. S.-h.L.
chien ai. See MOHISM.
Ch’ien-fu Lun, Chinese title of Comments of a Recluse (second
century A.D.), a Confucian political and cosmological work by Wang Fu. Divided
into thirty-six essays, it gives a vivid picture of the sociopolitical world of
later Han China and prescribes practical measures to overcome corruption and
other problems confronting the state. There are discussions on cosmology
affirming the belief that the world is constituted by vital energy (ch’i). The
pivotal role of human beings in shaping the world is emphasized. A person may
be favorably endowed, but education remains crucial. Several essays address the
perceived excesses in religious practices. Above all, the author targets for
criticism the system of official appointment that privileges family background and
reputation at the expense of moral worth and ability. Largely Confucian in
outlook, the work reflects strong utilitarian interest reminiscent of Hsün
Tzu. CH’I,
CONFUCIANISM. A.K.L.C.
Ch’ien Mu (1895–1990), Chinese historian, a leading contemporary New Confucian
scholar and cofounder (with T’ang Chün-i) of New Asia College in Hong Kong (1949). Early in his career he was
respected for his effort to date the ancient Chinese philosophers and for his
study of Confucian thought in the Han dynasty (206 B.C.–A.D. 220). During World
War II he wrote the Outline of Chinese History, in which he developed a
nationalist historical viewpoint stressing the vitality of traditional Chinese
culture. Late in his career he published his monumental study of Chu Hsi
(1130–1200). He firmly believed the spirit of Confucius and Chu Hsi should be
revived today.
CHINESE PHILOSOPHY, CHU
HSI, T’ANG CHÜN-I. S.-h.L.
chih1, Chinese term roughly corresponding to
‘knowledge’. A concise explanation is found in the Hsün Tzu: “That in man by
which he knows is called chih; the chih that accords with actuality is called
wisdom chih.” This definition suggests a distinction between intelligence or
the ability to know and its achievement or wisdom, often indicated by its
homophone. The later Mohists provide more technical definitions, stressing
especially the connection between names and objects. Confucians for the most
part are interested in the ethical significance of chih. Thus chih, in the
Analects of Confucius, is often used as a verb in the sense ‘to realize’,
conveying understanding and appreciation of ethical learning, in addition to
the use of chih in the sense of acquiring information. And one of the basic
problems in Confucian ethics pertains to chih-hsing ho-i (the unity of knowledge
and action).
CONFUCIANISM, MOHISM.
A.S.C.
chih2, Chinese term often translated as ‘will’. It refers to general
goals in life as well as to more specific aims and intentions. Chih is supposed
to pertain to the heart/mind (hsin) and to be something that can be set up and
attained. It is sometimes compared in Chinese philosophical texts to aiming in
archery, and is explained by some commentators as “directions of the
heart/mind.” Confucians emphasize the need to set up the proper chih to guide
one’s behavior and way of life generally, while Taoists advocate letting
oneself respond spontaneously to situations one is confronted with, free from
direction by chih.
CONFUCIANISM. K.-l.S.
chih-hsing ho-i, Chinese term for the Confucian doctrine, propounded by Wang
Yang-ming, of the unity of knowledge and action. The doctrine is sometimes
expressed in terms of the unity of moral learning and action. A recent
interpretation focuses on the non-contingent connection between prospective and
retrospective moral knowledge or achievement. Noteworthy is the role of desire,
intention, will, and motive in the mediation of knowledge and action as
informed by practical reasonableness in reflection that responds to changing
circumstances. Wang’s doctrine is best construed as an attempt to articulate
the concrete significance of jen, the Neo-Confucian ideal of the universe as a
moral community. A.S.C.
Chillington, Richard. See KILVINGTON.
Chinese Legalism, the collective views of the Chinese “school of laws” theorists, so
called in recognition of the importance given to strict application of laws in
the work of Shang Yang 390338 B.C. and his most prominent successor, Han Fei
Tzu d. 223 B.C.. The Legalists were political realists who believed that
success in the context of Warring States China 403221 B.C. depended on
organizing the state into a military camp, and that failure meant nothing less
than political extinction. Although they challenged the viability of the
Confucian model of ritually constituted community with their call to law and
order, they sidestepped the need to dispute the ritual-versus-law positions by
claiming that different periods had different problems, and different problems
required new and innovative solutions. Shang Yang believed that the fundamental
and complementary occupations of the state, agriculture and warfare, could be
prosecuted most successfully by insisting on adherence to clearly articulated
laws and by enforcing strict punishments for even minor violations. There was
an assumed antagonism between the interests of the individual and the interests
of the state. By manipulating rewards and punishments and controlling the
“handles of life and death,” the ruler could subjugate his people and bring
them into compliance with the national purpose. Law would replace morality and
function as the exclusive standard of good. Fastidious application of the law,
with severe punishments for infractions, was believed to be a policy that
would arrest criminality and quickly make punishment unnecessary. Given that
the law served the state as an objective and impartial standard, the goal was
to minimize any reliance upon subjective interpretation. The Legalists thus
conceived of the machinery of state as operating automatically on the basis of
self-regulating and self-perpetuating “systems.” They advocated techniques of statecraft
(shu) such as “accountability” (hsing-ming), the demand for absolute congruency
between stipulated duties and actual performance in office, and “doing nothing”
(wu-wei), the ruler residing beyond the laws of the state to reformulate them
when necessary, but to resist reinterpreting them to accommodate particular
cases. Han Fei Tzu, the last and most influential spokesperson of Legalism,
adapted the military precept of strategic advantage (shih) to the rule of
government. The ruler, without the prestige and influence of his position, was
most often a rather ordinary person. He had a choice: he could rely on his
personal attributes and pit his character against the collective strength of
his people, or he could tap the collective strength of the empire by using his
position and his exclusive power over life and death as a fulcrum to ensure
that his will was carried out. What was strategic advantage in warfare became
political purchase in the government of the state. Only the ruler with the
astuteness and the resolve to hoard and maximize all of the advantages
available to him could guarantee continuation in power. Han Fei believed that
the closer one was to the seat of power, the greater threat one posed to the
ruler. Hence, all nobler virtues and sentiments
(benevolence, trust, honor, mercy)
were repudiated as means for conspiring ministers and would-be usurpers
to undermine the absolute authority of the throne. Survival was dependent upon
total and unflagging distrust. FA,
HAN FEI TZU, SHANG YANG.
R.P.P. & R.T.A.
Chinese philosophy,
philosophy produced in China from the sixth century B.C. to the present.
Traditional Chinese philosophy. Its history may be divided into six periods:
(1) Pre-Ch’in (before 221 B.C.): Spring and Autumn, 722–481 B.C.; Warring States, 403–222 B.C.
(2) Han (206 B.C.–A.D. 220): Western (Former) Han, 206 B.C.–A.D. 8; Hsin, A.D. 9–23; Eastern (Later) Han, A.D. 25–220.
(3) Wei-Chin (220–420): Wei, 220–65; Western Chin, 265–317; Eastern Chin, 317–420.
(4) Sui-T’ang (581–907): Sui, 581–618; T’ang, 618–907; Five Dynasties, 907–60.
(5) Sung-Yüan-Ming (960–1644): Northern Sung, 960–1126; Southern Sung, 1127–1279; Yüan (Mongol), 1271–1368; Ming, 1368–1644.
(6) Ch’ing (Manchu), 1644–1912.
In the late Chou dynasty (1111–249 B.C.), before Ch’in (221–206 B.C.) unified the
country, China entered the so-called Spring and Autumn period and the Warring
States period, and Chou culture was in decline. The so-called hundred schools
of thought were contending with one another; among them six were
philosophically significant:
(a) Ju-chia (Confucianism), represented by Confucius (551–479 B.C.), Mencius (371–289 B.C.?), and Hsün Tzu (fl. 298–238 B.C.);
(b) Tao-chia (Taoism), represented by Lao Tzu (sixth or fourth century B.C.) and Chuang Tzu (between 399 and 295 B.C.);
(c) Mo-chia (Mohism), represented by Mo Tzu (fl. 479–438 B.C.);
(d) Ming-chia (Logicians), represented by Hui Shih (380–305 B.C.) and Kung-sun Lung (b. 380 B.C.?);
(e) Yin-yang-chia (Yin-yang school), represented by Tsou Yen (305–240 B.C.?);
(f) Fa-chia (Legalism), represented by Han Fei (d. 233 B.C.).
Thus, China
enjoyed her first golden period of philosophy in the Pre-Ch’in period. As most
Chinese philosophies were giving responses to existential problems then, it is
no wonder Chinese philosophy had a predominantly practical character. It has
never developed the purely theoretical attitude characteristic of Grecian
philosophy. During the Han dynasty, in 136 B.C., Confucianism was established
as the state ideology. But it was blended with ideas of Taoism, Legalism, and
the Yinyang school. An organic view of the universe was developed; creative
thinking was replaced by study of the so-called Five Classics: Book of Poetry,
Book of History, Book of Changes, Book of Rites, and Spring and Autumn Annals.
As the First Emperor of Ch’in burned the Classics (except for the
I-Ching), in the early Han scholars were asked to write down the texts they had
memorized in modern script. Later some texts in ancient script were discovered,
but were rejected as spurious by modern-script supporters. Hence there were
constant disputes between the modern-script school and the ancient-script
school. Wei-Chin scholars were fed up with studies of the Classics in trivial
detail. They also showed a tendency to step over the bounds of rites. Their
interest turned to something more metaphysical; the Lao Tzu, the Chuang Tzu,
and the I-Ching were their favorite readings. Especially influential were
Hsiang Hsiu’s (fl. A.D. 250) and Kuo Hsiang’s (d. A.D. 312) Commentaries on the
Chuang Tzu, and Wang Pi’s (226–49) Commentaries on the Lao Tzu and I-Ching.
Although Wang’s perspective was predominantly Taoist, he was the first to brush
aside the hsiang-shu (forms and numbers) approach to the study of the I-Ching and
concentrate on i-li (meanings and principles) alone. Sung philosophers continued
the i-li approach, but they reinterpreted the Classics from a Confucian
perspective. Although Buddhism was imported into China in the late Han period,
it took several hundred years for the Chinese to absorb Buddhist insights and
ways of thinking. First the Chinese had to rely on ko-i matching the concepts
by using Taoist ideas to transmit Buddhist messages. After the Chinese learned
a great deal from Buddhism by translating Buddhist texts into Chinese, they
attempted to develop the Chinese versions of Buddhism in the Sui-T’ang period. On
the whole they favored Mahayana over Hinayana (Theravada) Buddhism, and they
developed a much more life-affirming attitude through Hua-yen and T’ien-tai
Buddhism, which they believed to represent Buddha’s mature thought. Ch’an went
even further, seeking sudden enlightenment instead of scripture studies. Ch’an,
exported to Japan, has become Zen, a better-known term in the West. In response
to the Buddhist challenge, the Neo-Confucian thinkers gave a totally new
interpretation of Confucian philosophy by going back to insights implicit in
Confucius’s so-called Four Books: the Analects, the Mencius, The Great
Learning, and the Doctrine of the Mean (the latter two were chapters taken from
the Book of Rites). They were also fascinated by the I-Ching. They borrowed
ideas from Buddhism and Taoism to develop a new Confucian cosmology and moral
metaphysics. Sung-Ming Neo-Confucianism brought Chinese philosophy to a new
height; some consider the period the Chinese Renaissance. The movement started
with Chou Tun-i (1017–73), but the real founders of Neo-Confucianism were the
Ch’eng brothers: Ch’eng Hao (1032–85) and Ch’eng Yi (1033–1107). Then came Chu Hsi
(1130–1200), a great synthesizer often compared with Thomas Aquinas or Kant in the
West, who further developed Ch’eng Yi’s ideas into a systematic philosophy and
originated the so-called Ch’eng-Chu school. But he was opposed by his younger
contemporary Lu Hsiang-shan (1139–93). During the Ming dynasty, Wang Yang-ming
(1472–1529) reacted against Chu Hsi by reviving the insight of Lu Hsiang-shan,
hence the so-called Lu-Wang school. During the Ch’ing dynasty, under the rule of
the Manchus, scholars turned to historical scholarship and showed little
interest in philosophical speculation. In the late Ch’ing, K’ang Yu-wei
(1858–1927) revived the modern-script school, pushed for radical reform, but
failed miserably in his attempt. Contemporary Chinese philosophy. Three
important trends can be discerned, intertwined with one another: the
importation of Western philosophy, the dominance of Marxism on Mainland China,
and the development of contemporary New Confucian philosophy. During the early
twentieth century China awoke to the fact that traditional Chinese culture
could not provide all the means for China to enter into the modern era in
competition with the Western powers. Hence the first urgent task was to learn
from the West. Almost all philosophical movements had their exponents, but they
were soon totally eclipsed by Marxism, which was established as the official
ideology in China after the Communist takeover in 1949. Mao Tse-tung (1893–1976)
succeeded in the line of Marx, Engels, Lenin, and Stalin. The Communist regime
was intolerant of all opposing views. The Cultural Revolution was launched in
1966, and for a whole decade China closed her doors to the outside world. Almost
all the intellectuals inside or outside of the Communist party were purged or
suppressed. After the Cultural Revolution was over, universities were reopened
in 1978. From 1979 to 1989, intellectuals enjoyed unprecedented freedom. One
editorial in People’s Daily News said that Marx’s ideas were the product of the
nineteenth century and did not provide all the answers for problems at the
present time, and hence it was desirable to develop Marxism further. Such a
message was interpreted by scholars in different ways. Although the thoughts
set forth by scholars lacked depth, the lively atmosphere
could be compared to the May Fourth New Culture Movement in 1919. Unfortunately,
however, violent suppression of demonstrators in Peking’s Tiananmen Square in
1989 put a stop to all this. Control of ideology became much stricter for the
time being, although the doors to the outside world were not completely closed.
As for the Nationalist government, which had fled to Taiwan in 1949, the
control of ideology under its jurisdiction was never total on the island;
liberalism has been strong among the intellectuals. Analytic philosophy,
existentialism, and hermeneutics all have their followers; today even
radicalism has its attraction for certain young scholars. Even though
mainstream Chinese thought in the twentieth century has condemned the Chinese
tradition altogether, that tradition has never completely died out. In fact the
most creative talents were found in the contemporary New Confucian movement,
which sought to bring about a synthesis between East and West. Among those who
stayed on the mainland, Fung Yu-lan (1895–1990) and Ho Lin (1902–92) changed their
earlier views after the Communist takeover, but Liang Sou-ming (1893–1988) and
Hsiung Shih-li (1885–1968) kept some of their beliefs. Ch’ien Mu (1895–1990) and T’ang
Chün-i (1909–78) moved to Hong Kong, and Thomé H. Fang (1899–1976), Hsü Fu-kuan
(1903–82), and Mou Tsung-san (1909–95) moved to Taiwan, where they exerted profound
influence on younger scholars. Today contemporary New Confucianism is still a
vital intellectual movement in Hong Kong, Taiwan, and overseas; it is even
studied in Mainland China. The New Confucians urge a revival of the traditional
spirit of jen (humanity) and sheng (creativity); at the same time they turn to the
West, arguing for the incorporation of modern science and democracy into
Chinese culture. The New Confucian philosophical movement in the narrower sense
derived inspiration from Hsiung Shih-li. Among his disciples the most original
thinker is Mou Tsung-san, who has developed his own system of philosophy. He
maintains that the three major Chinese traditions (Confucian, Taoist, and
Buddhist) agree in asserting that humans have the
endowment for intellectual intuition, meaning personal participation in tao (the
Way). But the so-called third generation has a much broader scope; it includes
scholars with varied backgrounds such as Yu Ying-shih (b. 1930), Liu Shu-hsien (b.
1934), and Tu Wei-ming (b. 1940), whose ideas have impact on intellectuals at large
and whose selected writings have recently been allowed to be published on the
mainland. The future of Chinese philosophy will still depend on the
interactions of imported Western thought, Chinese Marxism, and New
Confucianism.
BUDDHISM, CHU HSI,
CONFUCIANISM, HSIUNG SHIH-LI, NEO-CONFUCIANISM, TAOISM, WANG YANG-MING. S.-h.L.
Chinese room argument. See SEARLE.
ching, Chinese term meaning ‘reverence’,
‘seriousness’, ‘attentiveness’, ‘composure’. In early texts, ching is the
appropriate attitude toward spirits, one’s parents, and the ruler; it was
originally interchangeable with another term, kung respect. Among
Neo-Confucians, these terms are distinguished: ching reserved for the inner
state of mind and kung for its outer manifestations. This distinction was part
of the Neo-Confucian response to the quietistic goal of meditative calm
advocated by many Taoists and Buddhists. Neo-Confucians sought to maintain an
imperturbable state of “reverential attentiveness” not only in meditation but
throughout all activity. This sense of ching is best understood as a
Neo-Confucian appropriation of the Ch’an (Zen) ideal of yi-hsing san-mei
(universal samadhi), prominent in texts such as the Platform Sutra. P.J.I.
ch’ing, Chinese term meaning (1) ‘essence’, ‘essential’; (2) ‘emotion’, ‘passions’.
Originally, the ch’ing of x was the properties without which x would cease to
be the kind of thing that it is. In this sense it contrasts with the nature
(hsing) of x: the properties x has if it is a flourishing instance of its kind.
By the time of Hsün Tzu, though, ch’ing comes to refer to human emotions or
passions. A list of “the six emotions” (liu ch’ing) soon became fairly standard:
fondness (hao), dislike (wu), delight (hsi), anger (nu), sadness (ai), and joy (le).
B.W.V.N.
Chisholm, Roderick Milton (1916–99), influential American philosopher
whose publications spanned the field, including ethics and the history of
philosophy. He is mainly known as an epistemologist, metaphysician, and
philosopher of mind. In early opposition to powerful forms of reductionism,
such as phenomenalism, extensionalism, and physicalism, Chisholm developed an
original philosophy of his own. Educated at Brown and Harvard (Ph.D., 1942), he
spent nearly his entire career at Brown. He is known chiefly for
the following contributions. (a) Together with his teacher and later his
colleague at Brown, C. J. Ducasse, he developed and long defended an adverbial
account of sensory experience, set against the sense-datum act-object account
then dominant. (b) Based on deeply probing analysis of the free will problematic,
he defended a libertarian position, again in opposition to the compatibilism
long orthodox in analytic circles. His libertarianism had, moreover, an unusual
account of agency, based on distinguishing transeunt event causation from
immanent agent causation. (c) In opposition to the celebrated linguistic turn of
linguistic philosophy, he defended the primacy of intentionality, a defense
made famous not only through important papers, but also through his extensive
and eventually published correspondence with Wilfrid Sellars. (d) Quick to
recognize the importance and distinctiveness of the de se, he welcomed it as a
basis for much de re thought. (e) His realist ontology is developed through an
intentional concept of “entailment,” used to define key concepts of his system,
and to provide criteria of identity for occupants of fundamental categories. (f)
In epistemology, he famously defended forms of foundationalism and internalism,
and offered a delicately argued dissolution of the ancient problem of the
criterion. The principles of Chisholm’s epistemology and metaphysics are not
laid down antecedently as hard-and-fast axioms. Lacking any inviolable
antecedent privilege, they must pass muster in the light of their consequences
and by comparison with whatever else we may find plausible. In this regard he
sharply contrasts with such epistemologists as Popper, with the skepticism of
justification attendant on his deductivism, and Quine, whose stranded
naturalism drives so much of his radical epistemology and metaphysics. By
contrast, Chisholm has no antecedently set epistemic or metaphysical
principles. His philosophical views develop rather dialectically, with
sensitivity to whatever considerations, examples, or counterexamples reflection
may reveal as relevant. This makes for a demanding complexity of elaboration,
relieved, however, by a powerful drive for ontological and conceptual
economy.
choice sequence, a
variety of infinite sequence introduced by L. E. J. Brouwer to express the
non-classical properties of the continuum (the set of real numbers) within
intuitionism. A choice sequence is determined by a finite initial segment
together with a “rule” for continuing the sequence. The rule, however, may
allow some freedom in choosing each subsequent element. Thus the sequence might
start with the rational numbers 0 and then ½, and the rule might require the n
! 1st element to be some rational number within ½n of the nth choice, without any
further restriction. The sequence of rationals thus generated must converge to
a real number, r. But r’s definition leaves open its exact location in the
continuum. Speaking intuitionistically, r violates the classical law of
trichotomy: given any pair of real numbers e.g., r and ½, the first is either
less than, equal to, or greater than the second. From the 1940s Brouwer got
this non-classical effect without appealing to the apparently nonmathematical
notion of free choice. Instead he used sequences generated by the activity of
an idealized mathematician (the creating subject), together with propositions
that he took to be undecided. Given such a proposition, P (e.g., Fermat’s last
theorem, that for n > 2 there is no general method of
finding triplets of numbers with the property
that the sum of each of the first two raised to the nth power is equal to the
result of raising the third to the nth power; or Goldbach’s conjecture, that
every even number is the sum of two prime numbers), we can modify the definition of r: the
(n+1)st element is ½ if at the nth stage of research P remains undecided. That
element and all its successors are ½ + (½)^n if by that stage P is proved; they
are ½ − (½)^n if P is refuted. Since he held that there is an endless supply of
such propositions, Brouwer believed that we can always use this method to
refute classical laws. In the early 1960s Stephen Kleene and Richard Vesley
reproduced some main parts of Brouwer’s theory of the continuum in a formal
system based on Kleene’s earlier recursion-theoretic interpretation of
intuitionism and of choice sequences. At about the same time but in a different and occasionally
incompatible vein Saul Kripke formally
captured the power of Brouwer’s counterexamples without recourse to recursive
functions and without invoking either the creating subject or the notion of
free choice.
Subsequently Georg Kreisel, A. S. Troelstra, Dirk van Dalen, and
others produced formal systems that analyze Brouwer’s basic assumptions about
open-futured objects like choice sequences.
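In symbols, the creating-subject recipe sketched above can be set out as a case definition; writing r_{n+1} for the (n + 1)st element (a reconstruction of the recipe, not Brouwer's own notation):

    % (n+1)st element of the creating-subject sequence for an undecided P
    r_{n+1} =
    \begin{cases}
      \tfrac{1}{2} & \text{if } P \text{ is undecided at stage } n,\\
      \tfrac{1}{2} + \left(\tfrac{1}{2}\right)^{n} & \text{if } P \text{ has been proved by stage } n,\\
      \tfrac{1}{2} - \left(\tfrac{1}{2}\right)^{n} & \text{if } P \text{ has been refuted by stage } n.
    \end{cases}

Settling whether r < ½, r = ½, or r > ½ would thereby settle P itself, which is why such an r resists the classical law of trichotomy.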
Chomsky, Noam b.1928,
preeminent American linguist, philosopher, and political activist who has spent
his professional career at the Massachusetts Institute of Technology. Chomsky’s
best-known scientific achievement is the establishment of a rigorous and
philosophically compelling foundation for the scientific study of the grammar
of natural language. With the use of tools from the study of formal languages,
he gave a far more precise and explanatory account of natural language grammar
than had previously been given (Syntactic Structures, 1957). He has since
developed a number of highly influential frameworks for the study of natural
language grammar (e.g., Aspects of the Theory of Syntax, 1965; Lectures on Government and Binding, 1981; The Minimalist Program, 1995). Though there are
significant differences in detail, there are also common themes that underlie
these approaches. Perhaps the most central is that there is an innate set of
linguistic principles shared by all humans, and the purpose of linguistic
inquiry is to describe the initial state of the language learner, and account
for linguistic variation via the most general possible mechanisms. On Chomsky’s
conception of linguistics, languages are structures in the brains of individual
speakers, described at a certain level of abstraction within the theory. These
structures occur within the language faculty, a hypothesized module of the
human brain. Universal Grammar is the set of principles hard-wired into the language
faculty that determine the class of possible human languages. This conception
of linguistics involves several influential and controversial theses. First,
the hypothesis of a Universal Grammar entails the existence of innate
linguistic principles. Secondly, the hypothesis of a language faculty entails
that our linguistic abilities, at least so far as grammar is concerned, are not
a product of general reasoning processes. Finally, and perhaps most
controversially, since having one of these structures is an intrinsic property
of a speaker, properties of languages so conceived are determined solely by
states of the speaker. On this individualistic conception of language, there is
no room in scientific linguistics for the social entities determined by linguistic
communities that are languages according to previous anthropological
conceptions of the discipline. Many of Chomsky’s most significant contributions
to philosophy, such as his influential rejection of behaviorism ("Review of Skinner's Verbal Behavior," Language, 1959), stem from his elaborations and defenses of the above consequences (cf. also Cartesian Linguistics, 1966; Reflections on Language, 1975; Rules and Representations, 1980; Knowledge of Language, 1986). Chomsky's philosophical writings are characterized by an
adherence to methodological naturalism, the view that the mind should be
studied like any other natural phenomenon. In recent years, he has also argued
that reference, in the sense in which it is used in the philosophy of language,
plays no role in a scientific theory of language ("Language and Nature," Mind, 1995).
Chou Tun-yi 1017–73,
Chinese Neo-Confucian philosopher. His most important work, the T'ai-chi
t’u-shuo “Explanations of the Diagram of the Supreme Ultimate”, consists of a
chart, depicting the constituents, structure, and evolutionary process of the
cosmos, along with an explanatory commentary. This work, together with his
T’ungshu “Penetrating the I-Ching“, introduced many of the fundamental ideas of
Neo-Confucian metaphysics. Consequently, heated debates arose concerning Chou’s
diagram, some claiming it described the universe as arising out of wu (non-being)
and thus was inspired by and supported Taoism. Chou’s primary interest was
always cosmological; he never systematically related his metaphysics to ethical
concerns.
ch’üan, Chinese term for
a key Confucian concept that may be rendered as meaning ‘weighing of
circumstances’, ‘exigency’, or ‘moral discretion’. A metaphorical extension of
the basic sense of a steelyard for measuring weight, ch’üan essentially
pertains to assessment of the importance of moral considerations to a
current matter of concern. Alternatively, the exercise of ch’üan consists in a
judgment of the comparative importance of competing options answering to a
current problematic situation. The judgment must accord with li (principle, reason), i.e., be a principled or reasoned judgment. In the sense of exigency,
ch’üan is a hard case, i.e., one falling outside the normal scope of the operation
of standards of conduct. In the sense of ‘moral discretion’, ch’üan must
conform to the requirement of i (rightness).
Chuang Tzu, also called
Chuang Chou 4th century B.C., Chinese Taoist philosopher. According to many
scholars, ideas in the inner chapters (chapters 1–7) of the text Chuang Tzu
may be ascribed to the person Chuang Tzu, while the other chapters contain
ideas related to his thought and later developments of his ideas. The inner
chapters contain dialogues, stories, verses, sayings, and brief essays geared
toward inducing an altered perspective on life. A realization that there is no
neutral ground for adjudicating between opposing judgments made from different
perspectives is supposed to lead to a relaxation of the importance one attaches
to such judgments and to such distinctions as those between right and wrong,
life and death, and self and others. The way of life advocated is subject to
different interpretations. Parts of the text seem to advocate a way of life not
radically different from the conventional one, though with a lessened emotional
involvement. Other parts seem to advocate a more radical change; one is
supposed to react spontaneously to situations one is confronted with, with no
preconceived goals or preconceptions of what is right or proper, and to view
all occurrences, including changes in oneself, as part of the transformation
process of the natural order.
Chu Hsi 1130–1200,
Neo-Confucian scholar of the Sung dynasty (960–1279), commonly regarded as the greatest Chinese philosopher after Confucius and Mencius. His mentor was Ch'eng Yi (1033–1107), hence the so-called Ch'eng-Chu School. Chu Hsi developed Ch'eng
Yi’s ideas into a comprehensive metaphysics of li principle and ch’i material
force. Li is incorporeal, one, eternal, and unchanging, always good; ch’i is
physical, many, transitory, and changeable, involving both good and evil. They
are not to be mixed or separated. Things are composed of both li and ch’i. Chu
identifies hsing (human nature) as li, ch'ing (feelings and emotions) as ch'i, and hsin (mind/heart) as ch'i of the subtlest kind, comprising principles. He
interprets ko-wu in the Great Learning to mean the investigation of principles
inherent in things, and chih-chih to mean the extension of knowledge. He was
opposed by Lu Hsiang-shan (1139–93) and Wang Yang-ming (1472–1529), who argued that
mind is principle. Mou Tsung-san thinks that Lu’s and Wang’s position was
closer to Mencius’s philosophy, which was honored as orthodoxy. But Ch’eng and
Chu’s commentaries on the Four Books were used as the basis for civil service
examinations from 1313 until the system was abolished in 1905.
chung, shu, Chinese
philosophical terms important in Confucianism, meaning ‘loyalty’ or
‘commitment’, and ‘consideration’ or ‘reciprocity’, respectively. In the
Analects, Confucius observes that there is one thread running through his way
of life, and a disciple describes the one thread as constituted by chung and
shu. Shu is explained in the text as not doing to another what one would not
have wished done to oneself, but chung is not explicitly explained. Scholars
interpret chung variously as a commitment to having one’s behavior guided by
shu, as a commitment to observing the norms of li (rites), to be supplemented by
shu, which humanizes and adds a flexibility to the observance of such norms, or
as a strictness in observing one’s duties toward superiors or equals to be
supplemented by shu, which involves considerateness toward inferiors or equals,
thereby humanizing and adding a flexibility to the application of rules
governing one’s treatment of them. The pair of terms continued to be used by
later Confucians to refer to supplementary aspects of the ethical ideal or
self-cultivation process; e.g., some used chung to refer to a full
manifestation of one’s originally good heart/mind hsin, and shu to refer to the
extension of that heart/mind to others.
Chung-yung, a portion of
the Chinese Confucian classic Book of Rites. The standard English title of the
Chung-yung (composed in the third or second century B.C.) is The Doctrine of the
Mean, but Centrality and Commonality is more accurate. Although frequently
treated as an independent classic from quite early in its history, it did not
receive
canonical status until Chu Hsi made it one of the Four Books. The text is a
collection of aphorisms and short essays unified by common themes. Portions of
the text outline a virtue ethic, stressing flexible response to changing
contexts, and identifying human flourishing with complete development of the
capacities present in one's nature (hsing), which is given by Heaven (t'ien). As is
typical of Confucianism, virtue in the family parallels political virtue.
chün-tzu, Chinese term
meaning ‘gentleman’, ‘superior man’, ‘noble person’, or ‘exemplary individual’.
Chün-tzu is Confucius’s practically attainable ideal of ethical excellence. A
chün-tzu, unlike a sheng (sage), is one who exemplifies in his life and conduct a concern for jen (humanity), li (propriety), and i (rightness/righteousness). Jen
pertains to affectionate regard to the well-being of one’s fellows in the
community; li to ritual propriety conformable to traditional rules of proper
behavior; and i to one’s sense of rightness, especially in dealing with
changing circumstances. A chün-tzu is marked by a catholic and neutral attitude
toward preconceived moral opinions and established moral practices, a concern
with harmony of words and deeds. These salient features enable the chün-tzu to
cope with novel and exigent circumstances, while at the same time heeding the
importance of moral tradition as a guide to conduct.
Church, Alonzo 1903–95,
American logician, mathematician, and philosopher, known in pure logic for his
discovery and application of the Church lambda operator, one of the central
ideas of the Church lambda calculus, and for his rigorous formalizations of the
theory of types, a higher-order underlying logic originally formulated in a
flawed form by Whitehead and Russell. The lambda operator enables direct,
unambiguous, symbolic representation of a range of philosophically and
mathematically important expressions previously representable only ambiguously
or after elaborate paraphrasing. In philosophy, Church advocated rigorous
analytic methods based on symbolic logic. His philosophy was characterized by
his own version of logicism, the view that mathematics is reducible to logic,
and by his unhesitating acceptance of higher-order logics. Higher-order logics,
including second-order, are ontologically rich systems that involve
quantification of higher-order variables, variables that range over properties,
relations, and so on. Higher-order logics were routinely used in foundational
work by Frege, Peano, Hilbert, Gödel, Tarski, and others until around World War
II, when they suddenly lost favor. In regard to both his logicism and his
acceptance of higher-order logics, Church countered trends, increasingly
dominant in the third quarter of the twentieth century, against reduction of
mathematics to logic and against the so-called “ontological excesses” of
higher-order logic. In the 1970s, although admired for his high standards of
rigor and for his achievements, Church was regarded as conservative or perhaps
even reactionary. Opinions have softened in recent years. On the computational
and epistemological sides of logic Church made two major contributions. He was
the first to articulate the now widely accepted principle known as Church’s
thesis, that every effectively calculable arithmetic function is recursive. At
first highly controversial, this principle connects intuitive, epistemic,
extrinsic, and operational aspects of arithmetic with its formal, ontic,
intrinsic, and abstract aspects. Church’s thesis sets a purely arithmetic outer
limit on what is computationally achievable. Church's further work on Hilbert's "decision problem" led to the discovery and proof of Church's theorem: basically, that there is no computational
procedure for determining, of a finite-premised first-order argument, whether
it is valid or invalid. This result contrasts sharply with the previously known
result that the computational truth-table method suffices to determine the
validity of a finite-premised truth-functional argument. Church's theorem at once
highlights the vast difference between propositional logic and first-order logic
and sets an outer limit on what is achievable by “automated reasoning.”
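The contrast can be made concrete: for truth-functional arguments an exhaustive truth-table search settles validity, whereas by Church's theorem no analogous procedure exists for first-order arguments. A minimal sketch in Python (the function and variable names are ours, chosen purely for illustration):

    from itertools import product

    def valid(premises, conclusion, atoms):
        # Truth-table test: valid iff no assignment makes every
        # premise true and the conclusion false.
        for row in product([True, False], repeat=len(atoms)):
            v = dict(zip(atoms, row))
            if all(p(v) for p in premises) and not conclusion(v):
                return False  # counterexample row found
        return True

    # Modus ponens: p, p -> q, therefore q.
    p = lambda v: v["p"]
    p_implies_q = lambda v: (not v["p"]) or v["q"]
    q = lambda v: v["q"]
    print(valid([p, p_implies_q], q, ["p", "q"]))  # True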
Church’s mathematical and philosophical writings are influenced by Frege,
especially by Frege’s semantic distinction between sense and reference, his
emphasis on purely syntactical treatment of proof, and his doctrine that
sentences denote (are names of) their truth-values.
Churchland, Patricia
Smith b.1943, Canadian-born American philosopher and advocate of
neurophilosophy. She received her B.Phil. from Oxford in 1969 and held positions
at the University of Manitoba and the Institute for Advanced Study at Princeton, settling at the University of California, San Diego, with
appointments in philosophy and the Institute for Neural Computation. Skeptical
of philosophy’s a priori specification of mental categories and dissatisfied
with computational psychology’s purely top-down approach to their function,
Churchland began studying the brain at the University of Manitoba medical school. The result was a unique merger of science
and philosophy, a “neurophilosophy” that challenged the prevailing methodology
of mind. Thus, in a series of articles that includes "Fodor on Language Learning" (1978) and "A Perspective on Mind-Brain Research" (1980), she outlines a
new neurobiologically based paradigm. It subsumes simple non-linguistic
structures and organisms, since the brain is an evolved organ; but it preserves
functionalism, since a cognitive system’s mental states are explained via
high-level neurofunctional theories. It is a strategy of cooperation between
psychology and neuroscience, a “co-evolutionary” process eloquently described
in Neurophilosophy (1986), with the prediction that genuine cognitive phenomena
will be reduced, some as conceptualized within the commonsense framework,
others as transformed through the sciences. The same intellectual confluence is
displayed through Churchland’s various collaborations: with psychologist and
computational neurobiologist Terrence Sejnowski in The Computational Brain (1992); with neuroscientist Rodolfo Llinas in The Mind-Brain Continuum (1996); and with philosopher and husband Paul Churchland in On the Contrary (1998) (she and Paul Churchland are jointly appraised in R. McCauley, The Churchlands and Their Critics, 1996). From the viewpoint of neurophilosophy, interdisciplinary
cooperation is essential for advancing knowledge, for the truth lies in the
intertheoretic details.
Churchland, Paul M.
b.1942, Canadian-born American philosopher, leading proponent of eliminative
materialism. He received his Ph.D. from the University of Pittsburgh in 1969 and held positions at the Universities of Toronto and Manitoba and at the Institute for Advanced Study at Princeton. He is professor of philosophy and member of the Institute for Neural Computation at the University of California, San Diego. Churchland's
literary corpus constitutes a lucidly written, scientifically informed
narrative where his neurocomputational philosophy unfolds. Scientific Realism
and the Plasticity of Mind (1979) maintains that, though science is best construed
realistically, perception is conceptually driven, with no observational given,
while language is holistic, with meaning fixed by networks of associated usage.
Moreover, regarding the structure of science, higher-level theories should be
reduced by, incorporated into, or eliminated in favor of more basic theories
from natural science, and, in the specific case, commonsense psychology is a
largely false empirical theory, to be replaced by a non-sentential,
neuroscientific framework. This skepticism regarding “sentential” approaches is
a common thread, present in earlier papers, and taken up again in “Eliminative
Materialism and the Propositional Attitudes” 1981. When fully developed, the
non-sentential, neuroscientific framework takes the form of connectionist
network or parallel distributed processing models. Thus, with essays in A
Neurocomputational Perspective (1989), Churchland adds that genuine psychological
processes are sequences of activation patterns over neuronal networks.
Scientific theories, likewise, are learned vectors in the space of possible
activation patterns, with scientific explanation being prototypical activation
of a preferred vector. Classical epistemology, too, should be
neurocomputationally naturalized. Indeed, Churchland suggests a semantic view
whereby synonymy, or the sharing of concepts, is a similarity between patterns
in neuronal state-space. Even moral knowledge is analyzed as stored prototypes
of social reality that are elicited when an individual navigates through other
neurocomputational systems. The entire picture is expressed in The Engine of
Reason, the Seat of the Soul (1996) and, with his wife Patricia Churchland, by the essays in On the Contrary (1998). What has emerged is a neurocomputational
embodiment of the naturalist program, a panphilosophy that promises to capture
science, epistemology, language, and morals in one broad sweep of its
connectionist net.
Church’s thesis, the
thesis, proposed by Alonzo Church at a meeting of the American Mathematical
Society in April 1935, “that the notion of an effectively calculable function
of positive integers should be identified with that of a
recursive function. . . .” This proposal has been called Church’s thesis ever
since Kleene used that name in his Introduction to Metamathematics (1952). The informal notion of an effectively calculable function (effective procedure, or algorithm) had been used in mathematics and logic to indicate that a class of
problems is solvable in a “mechanical fashion” by following fixed elementary
rules. Underlying epistemological concerns came to the fore when modern logic
moved in the late nineteenth century from axiomatic to formal presentations of
theories. Hilbert suggested in 1904 that such formally presented theories be
taken as objects of mathematical study, and metamathematics has been pursued
vigorously and systematically since the 1920s. In its pursuit, concrete issues
arose that required for their resolution a delimitation of the class of
effective procedures. Hilbert’s important Entscheidungsproblem, the decision
problem for predicate logic, was one such issue. It was solved negatively by
Church and Turing relative to the
precise notion of recursiveness; the result was obtained independently by
Church and Turing, but is usually called Church’s theorem. A second significant
issue was the general formulation of the incompleteness theorems as applying to
all formal theories satisfying the usual representability and derivability
conditions, not just to specific formal systems like that of Principia
Mathematica. According to Kleene, Church proposed in 1933 the identification of
effective calculability with λ-definability. That proposal was not published at
the time, but in 1934 Church mentioned it in conversation to Gödel, who judged
it to be “thoroughly unsatisfactory.” In his Princeton Lectures of 1934, Gödel
defined the concept of a recursive function, but he was not convinced that all
effectively calculable functions would fall under it. The proof of the equivalence
between λ-definability and recursiveness by Church and Kleene led to Church's
first published formulation of the thesis as quoted above. The thesis was
reiterated in Church’s “An Unsolvable Problem of Elementary Number Theory”
1936. Turing introduced, in “On Computable Numbers, with an Application to the
Entscheidungsproblem” 1936, a notion of computability by machines and
maintained that it captures effective calculability exactly. Post’s paper
“Finite Combinatory Processes, Formulation 1” 1936 contains a model of
computation that is strikingly similar to Turing’s. However, Post did not
provide any analysis; he suggested considering the identification of effective
calculability with his concept as a working hypothesis that should be verified
by investigating ever wider formulations and reducing them to his basic
formulation. The classic papers of Gödel, Church, Turing, Post, and Kleene are
all reprinted in Davis, ed., The Undecidable, 1965. In his 1936 paper Church
gave one central reason for the proposed identification, namely that other
plausible explications of the informal notion lead to mathematical concepts
weaker than or equivalent to recursiveness. Two paradigmatic explications,
calculability of a function via algorithms or in a logic, were considered by
Church. In either case, the steps taken in determining function values have to
be effective; and if the effectiveness of steps is, as Church put it,
interpreted to mean recursiveness, then the function is recursive. The
fundamental interpretative difficulty in Church's "step-by-step argument" (which was turned into one of the "recursiveness conditions" Hilbert and Bernays used in their 1939 characterization of functions that can be evaluated according to rules) was bypassed by Turing. Analyzing human mechanical computations, Turing
was led to finiteness conditions that are motivated by the human computer’s
sensory limitations, but are ultimately based on memory limitations. Then he
showed that any function calculable by a human computer satisfying these
conditions is also computable by one of his machines. Both Church and Gödel
found Turing’s analysis convincing; indeed, Church wrote in a 1937 review of
Turing’s paper that Turing’s notion makes “the identification with
effectiveness in the ordinary (not explicitly defined) sense evident
immediately.” This reflective work of partly philosophical and partly
mathematical character provides one of the fundamental notions in mathematical
logic. Indeed, its proper understanding is crucial for judging the philosophical
significance of central metamathematical results like Gödel’s incompleteness theorems or
Church’s theorem. The work is also crucial for computer science, artificial
intelligence, and cognitive psychology, providing in these fields a basic theoretical
notion. For example, Church’s thesis is the cornerstone for Newell and Simon’s
delimitation of the class of physical symbol systems, i.e., universal machines with a particular architecture; see Newell's "Physical Symbol Systems" (1980).
Newell views the delimitation “as the most fundamental contribution of
artificial intelligence and computer science to the joint enterprise of
cognitive science." In a turn that had been taken by Turing in "Intelligent Machinery" (1948) and "Computing Machinery and Intelligence" (1950),
Newell points out the basic role physical symbol systems take on in the study
of the human mind: “the hypothesis is that humans are instances of physical
symbol systems, and, by virtue of this, mind enters into the physical universe.
. . . this hypothesis sets the terms on which we search for a scientific theory
of mind.”
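As a small illustration of the notion at issue, the recursion equations defining addition and multiplication over the natural numbers are a paradigm of effective calculability: each value is obtained from earlier ones by fixed elementary rules. A sketch in Python (ours, for illustration only; Python recursion is of course far more permissive than the recursive functions the thesis concerns):

    def add(m, n):
        # add(m, 0) = m; add(m, n + 1) = add(m, n) + 1
        return m if n == 0 else add(m, n - 1) + 1

    def mul(m, n):
        # mul(m, 0) = 0; mul(m, n + 1) = add(mul(m, n), m)
        return 0 if n == 0 else add(mul(m, n - 1), m)

    print(add(3, 4), mul(3, 4))  # 7 12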
Cicero, Marcus Tullius
106–43 B.C., Roman statesman, orator, essayist, and letter writer. He was
important not so much for formulating individual philosophical arguments as for
expositions of the doctrines of the major schools of Hellenistic philosophy,
and for, as he put it, “teaching philosophy to speak Latin.” The significance
of the latter can hardly be overestimated. Cicero’s coinages helped shape the
philosophical vocabulary of the Latin-speaking West well into the early modern
period. The most characteristic feature of Cicero’s thought is his attempt to
unify philosophy and rhetoric. His first major trilogy, On the Orator, On the
Republic, and On the Laws, presents a vision of wise statesmen-philosophers
whose greatest achievement is guiding political affairs through rhetorical
persuasion rather than violence. Philosophy, Cicero argues, needs rhetoric to
effect its most important practical goals, while rhetoric is useless without
the psychological, moral, and logical justification provided by philosophy.
This combination of eloquence and philosophy constitutes what he calls
humanitas (a coinage whose enduring influence is attested in later revivals of humanism), and it alone provides the foundation for constitutional governments; it is acquired, moreover, only through broad training in those subjects worthy of free citizens (artes liberales). In
philosophy of education, this Ciceronian conception of a humane education
encompassing poetry, rhetoric, history, morals, and politics endured as an
ideal, especially for those convinced that instruction in the liberal
disciplines is essential for citizens if their rational autonomy is to be expressed
in ways that are culturally and politically beneficial. A major aim of Cicero’s
earlier works is to appropriate for Roman high culture one of Greece’s most
distinctive products, philosophical theory, and to demonstrate Roman
superiority. He thus insists that Rome’s laws and political institutions
successfully embody the best in Grecian political theory, whereas the Grecians
themselves were inadequate to the crucial task of putting their theories into
practice. Taking over the Stoic conception of the universe as a rational whole,
governed by divine reason, he argues that human societies must be grounded in
natural law. For Cicero, nature’s law possesses the characteristics of a legal
code; in particular, it is formulable in a comparatively extended set of rules
against which existing societal institutions can be measured. Indeed, since
they so closely mirror the requirements of nature, Roman laws and institutions
furnish a nearly perfect paradigm for human societies. Cicero’s overall theory,
if not its particular details, established a lasting framework for
anti-positivist theories of law and morality, including those of Aquinas,
Grotius, Suárez, and Locke. The final two years of his life saw the creation of
a series of dialogue-treatises that provide an encyclopedic survey of
Hellenistic philosophy. Cicero himself follows the moderate fallibilism of
Philo of Larissa and the New Academy. Holding that philosophy is a method and
not a set of dogmas, he endorses an attitude of systematic doubt. However, unlike
Cartesian doubt, Cicero’s does not extend to the real world behind phenomena,
since he does not envision the possibility of strict phenomenalism. Nor does he
believe that systematic doubt leads to radical skepticism about knowledge.
Although no infallible criterion for distinguishing true from false impressions
is available, some impressions, he argues, are more "persuasive" (probabile) and
can be relied on to guide action. In Academics he offers detailed accounts of
Hellenistic epistemological debates, steering a middle course between dogmatism
and radical skepticism. A similar strategy governs the rest of his later
writings. Cicero presents the views of the major schools, submits them to
criticism, and tentatively supports any positions he finds “persuasive.” Three
connected works, On Divination, On Fate, and On the Nature of the Gods, survey
Epicurean, Stoic, and Academic arguments about theology and natural philosophy.
Much of the treatment of religious thought and practice is cool, witty, and
skeptically detached (much in the manner of eighteenth-century philosophes who, along with Hume, found much in Cicero to emulate). However, he concedes that Stoic arguments for providence are
“persuasive.” So too in ethics, he criticizes Epicurean, Stoic, and Peripatetic
doctrines in On Ends (45 B.C.) and their views on death, pain, irrational emotions, and happiness in Tusculan Disputations (45 B.C.). Yet, a
final work, On Duties, offers a practical ethical system based on Stoic
principles. Although sometimes dismissed as the eclecticism of an amateur,
Cicero’s method of selectively choosing from what had become authoritative
professional systems often displays considerable reflectiveness and
originality.
circular reasoning, reasoning
that, when traced backward from its conclusion, returns to that starting point,
as one returns to a starting point when tracing a circle. The discussion of
this topic by Richard Whately (1787–1863) in his Logic (1826) sets a high standard
of clarity and penetration. Logic textbooks often quote the following example
from Whately: To allow every man an unbounded freedom of speech must always be,
on the whole, advantageous to the State; for it is highly conducive to the
interests of the Community, that each individual should enjoy a liberty
perfectly unlimited, of expressing his sentiments. This passage illustrates how
circular reasoning is less obvious in a language, such as English, that, in
Whatley’s words, is “abounding in synonymous expressions, which have no
resemblance in sound, and no connection in etymology.” The premise and
conclusion do not consist of just the same words in the same order, nor can
logical or grammatical principles transform one into the other. Rather, they
have the same propositional content: they say the same thing in different
words. That is why appealing to one of them to provide reason for believing the
other amounts to giving something as a reason for itself. Circular reasoning is
often said to beg the question. ‘Begging the question’ and petitio principii
are translations of a phrase in Aristotle connected with a game of formal
disputation played in antiquity but not in recent times. The meanings of
‘question’ and ‘begging’ do not in any clear way determine the meaning of ‘question
begging’. There is no simple argument form that all and only circular arguments
have. It is not logic, in Whately's example above, that determines the identity
of content between the premise and the conclusion. Some theorists propose
rather more complicated formal or syntactic accounts of circularity. Others
believe that any account of circular reasoning must refer to the beliefs of
those who reason. Whether or not the following argument about articles in this
dictionary is circular depends on why the first premise should be accepted: (1) The article on inference contains no split infinitives. (2) The other articles contain no split infinitives. Therefore, (3) No article contains split infinitives. Consider two cases. Case I: Although (2) supports (1) inductively, both (1) and (2) have solid outside support independent of any prior acceptance of (3). This reasoning is not circular. Case II: Someone who advances the argument accepts (1) or (2) or both, only because he believes (3). Such reasoning is circular,
even though neither premise expresses just the same proposition as the
conclusion. The question remains controversial whether, in explaining
circularity, we should refer to the beliefs of individual reasoners or only to
the surrounding circumstances. One purpose of reasoning is to increase the
degree of reasonable confidence that one has in the truth of a conclusion.
Presuming the truth of a conclusion in support of a premise thwarts this
purpose, because the initial degree of reasonable confidence in the premise
cannot then exceed the initial degree of reasonable confidence in the
conclusion.
citta-matra, the Yogacara
Buddhist doctrine that there are no extramental entities, given classical
expression by Vasubandhu in the fourth or fifth century A.D. The classical form
of this doctrine is a variety of idealism that claims (1) that a coherent explanation of the facts of experience can be provided without appeal to anything extramental; (2) that no coherent account of what extramental entities are like is possible; and (3) that therefore the doctrine that there is nothing
but mind is to be preferred to its realistic competitors. The claim and the
argument were and are controversial among Buddhist metaphysicians.
See VIJÑAPTI. P.J.G.
civic humanism. See CLASSICAL REPUBLICANISM.
civil disobedience, a deliberate violation of the law, committed in order to draw attention to or rectify perceived
injustices in the law or policies of a state. Illustrative questions raised by
the topic include: how are such acts justified, how should the legal system
respond to such acts when justified, and must such acts be done publicly,
nonviolently, and/or with a willingness to accept attendant legal
sanctions?
Clarke, Samuel 1675–1729,
English philosopher, preacher, and theologian. Born in Norwich, he was educated
at Cambridge, where he came under the influence of Newton. Upon graduation
Clarke entered the established church, serving for a time as chaplain to Queen
Anne. He spent the last twenty years of his life as rector of St. James,
Westminster. Clarke wrote extensively on controversial theological and
philosophical issues (the nature of space and time, proofs of the existence of God, the doctrine of the Trinity, the incorporeality and natural immortality of the soul, freedom of the will, the nature of morality, etc.). His most philosophical works are his Boyle lectures of
1704 and 1705, in which he developed a forceful version of the cosmological
argument for the existence and nature of God and attacked the views of Hobbes,
Spinoza, and some proponents of deism; his correspondence with Leibniz (1715–16),
in which he defended Newton’s views of space and time and charged Leibniz with
holding views inconsistent with free will; and his writings against Anthony
Collins, in which he defended a libertarian view of the agent as the
undetermined cause of free actions and attacked Collins’s arguments for a
materialistic view of the mind. In these works Clarke maintains a position of
extreme rationalism, contending that the existence and nature of God can be
conclusively demonstrated, that the basic principles of morality are
necessarily true and immediately knowable, and that the existence of a future
state of rewards and punishments is assured by our knowledge that God will
reward the morally just and punish the morally wicked.
class, term sometimes
used as a synonym for ‘set’. When the two are distinguished, a class is
understood as a collection in the logical sense, i.e., as the extension of a
concept (e.g., the class of red objects). By contrast, sets, i.e., collections in
the mathematical sense, are understood as occurring in stages, where each stage
consists of the sets that can be formed from the non-sets and the sets already
formed at previous stages. When a set is formed at a given stage, only the
non-sets and the previously formed sets are even candidates for membership, but
absolutely anything can gain membership in a class simply by falling under the
appropriate concept. Thus, it is classes, not sets, that figure in the inconsistent
principle of unlimited comprehension. In set theory, proper classes are
collections of sets that are never formed at any stage, e.g., the class of all
sets (since new sets are formed at each stage, there is no stage at which all sets are available to be collected into a set).
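The inconsistent principle of unlimited comprehension mentioned above can be stated, and its failure exhibited, in a line or two; the following standard formulation is added here for illustration:

    % Unlimited comprehension: every condition phi determines a class.
    \exists C \,\forall x\, (x \in C \leftrightarrow \varphi(x))
    % Taking phi(x) to be x \notin x yields the Russell class
    % R = \{x : x \notin x\}, with R \in R \leftrightarrow R \notin R.

Stage-by-stage set formation blocks the paradox: at no stage is the Russell class available to be formed as a set.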
classical republicanism,
also known as civic humanism, a political outlook developed by Machiavelli in
Renaissance Italy and by James Harrington (1611–77) in seventeenth-century
England, modified by eighteenth-century British and Continental writers and
important for the thought of the American founding fathers. Drawing on Roman
historians, Machiavelli argued that a state could hope for security from the
blows of fortune only if its male citizens were devoted to its well-being. They
should take turns ruling and being ruled, be always prepared to fight for the
republic, and limit their private possessions. Such men would possess a wholly
secular virtù appropriate to political beings. Corruption, in the form of
excessive attachment to private interest, would then be the most serious threat
to the republic. Harrington’s utopian Oceana 1656 portrayed England governed
under such a system. Opposing the authoritarian views of Hobbes, it described a
system in which the well-to-do male citizens would elect some of their number
to govern for limited terms. Those governing would propose state policies; the
others would vote on the acceptability of the proposals. Agriculture was the
basis of the economy, but the size of estates was to be
strictly controlled. Harringtonianism helped form the views of the political
party opposing the dominance of the king and court. Montesquieu in France drew
on classical sources in discussing the importance of civic virtue and devotion
to the republic. All these views were well known to Jefferson, Adams, and other
American colonial and revolutionary thinkers; and some contemporary
communitarian critics of American culture return to classical republican ideas.
Clement of Alexandria
A.D. c.150–c.215, formative teacher in the early Christian church who, as a
“Christian gnostic,” combined enthusiasm for Grecian philosophy with a defense
of the church’s faith. He espoused spiritual and intellectual ascent toward
that complete but hidden knowledge or gnosis reserved for the truly
enlightened. Clement’s school did not practice strict fidelity to the
authorities, and possibly the teachings, of the institutional church, drawing
upon the Hellenistic traditions of Alexandria, including Philo and Middle
Platonism. As with the law among the Jews, so, for Clement, philosophy among
the pagans was a pedagogical preparation for Christ, in whom logos, reason, had
become enfleshed. Philosophers now should rise above their inferior understanding
to the perfect knowledge revealed in Christ. Though hostile to gnosticism and
its speculations, Clement was thoroughly Hellenized in outlook and sometimes
guilty of Docetism, not least in his reluctance to concede the utter humanness
of Jesus.
Clifford, William Kingdon
1845–79, British mathematician and philosopher. Educated at King's College, London, and Trinity College, Cambridge, he began giving public lectures in 1868, when he was appointed a fellow of Trinity, and in 1870 became professor of applied mathematics at University College, London. His academic
career ended prematurely when he died of tuberculosis. Clifford is best known
for his rigorous view on the relation between belief and evidence, which, in
“The Ethics of Belief,” he summarized thus: “It is wrong always, everywhere,
and for anyone, to believe anything on insufficient evidence.” He gives this
example. Imagine a shipowner who sends to sea an emigrant ship, although the
evidence raises strong suspicions as to the vessel’s seaworthiness. Ignoring
this evidence, he convinces himself that the ship’s condition is good enough
and, after it sinks and all the passengers die, collects his insurance money
without a trace of guilt. Clifford maintains that the owner had no right to
believe in the soundness of the ship. “He had acquired his belief not by
honestly earning it in patient investigation, but by stifling his doubts.” The
right Clifford is alluding to is moral, for what one believes is not a private
but a public affair and may have grave consequences for others. He regards us
as morally obliged to investigate the evidence thoroughly on any occasion, and
to withhold belief if evidential support is lacking. This obligation must be
fulfilled however trivial and insignificant a belief may seem, for a violation
of it may “leave its stamp upon our character forever.” Clifford thus rejected
Catholicism, to which he had subscribed originally, and became an agnostic.
James’s famous essay “The Will to Believe” criticizes Clifford’s view.
According to James, insufficient evidence need not stand in the way of
religious belief, for we have a right to hold beliefs that go beyond the
evidence provided they serve the pursuit of a legitimate goal.
closure. A set of
objects, O, is said to exhibit closure or to be closed under a given operation,
R, provided that for every object, x, if x is a member of O and x is R-related
to any object, y, then y is a member of O. For example, the set of propositions
is closed under deduction, for if p is a proposition and p entails q, i.e., q
is deducible from p, then q is a proposition simply because only propositions
can be entailed by propositions. In addition, many subsets of the set of
propositions are also closed under deduction. For example, the set of true
propositions is closed under deduction or entailment. Others are not. Under
most accounts of belief, we may fail to believe what is entailed by what we do,
in fact, believe. Thus, if knowledge is some form of class paradox closure
146 146 true, justified belief,
knowledge is not closed under deduction, for we may fail to believe a
proposition entailed by a known proposition. Nevertheless, there is a related
issue that has been the subject of much debate, namely: Is the set of justified
propositions closed under deduction? Aside from the obvious importance of the
answer to that question in developing an account of justification, there are
two important issues in epistemology that also depend on the answer. Subtleties
aside, the so-called Gettier problem depends in large part upon an affirmative
answer to that question. For, assuming that a proposition can be justified and
false, it is possible to construct cases in which a proposition, say p, is
justified, false, but believed. Now, consider a true proposition, q, which is
believed and entailed by p. If justification is closed under deduction, then q
is justified, true, and believed. But if the only basis for believing q is p,
it is clear that q is not known. Thus, true, justified belief is not sufficient
for knowledge. What response is appropriate to this problem has been a central
issue in epistemology since E. Gettier’s publication of “Is Justified True
Belief Knowledge?” Analysis, 1963. Whether justification is closed under
deduction is also crucial when evaluating a common, traditional argument for skepticism.
Consider any person, S, and let p be any proposition ordinarily thought to be
knowable, e.g., that there is a table before S. The argument for skepticism
goes like this: (1) If p is justified for S, then, since p entails q, where q is 'there is no evil genius making S falsely believe that p', q is justified for S. (2) S is not justified in believing q. Therefore, S is not justified in
believing p. The first premise depends upon justification being closed under
deduction.
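Returning to the opening definition: for finite cases closure can be computed directly. A small sketch in Python, adapting the definition to a binary operation (the names are ours, for illustration only):

    def close_under(seed, op):
        # Smallest superset S of `seed` with op(x, y) in S for all x, y in S.
        # Terminates only when that closure is finite.
        s = set(seed)
        while True:
            new = {op(x, y) for x in s for y in s} - s
            if not new:
                return s
            s |= new

    # {0, 1} is not closed under addition mod 3; its closure is {0, 1, 2}.
    print(close_under({0, 1}, lambda x, y: (x + y) % 3))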
Coase theorem, a
non-formal insight by Ronald Coase (Nobel Prize in Economics, 1991): assuming
that there are no transaction costs involved in exchanging rights for money,
then no matter how rights are initially distributed, rational agents will buy
and sell them so as to maximize individual returns. In jurisprudence this
proposition has been the basis for a claim about how rights should be
distributed even when (as is usual) transaction costs are high: the law should
confer rights on those who would purchase them were they for sale on markets
without transaction costs; e.g., the right to an indivisible, unsharable
resource should be conferred on the agent willing to pay the highest price for
it.
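A toy numerical illustration of the insight (the valuations are invented for the example): with costless bargaining, an indivisible right ends up with the higher valuer no matter who holds it initially.

    # Hypothetical dollar valuations of one indivisible, unsharable right.
    valuations = {"A": 120, "B": 80}

    def final_holder(initial_owner):
        other = "B" if initial_owner == "A" else "A"
        # With no transaction costs, a sale occurs iff the other
        # agent values the right more than the current holder does.
        return other if valuations[other] > valuations[initial_owner] else initial_owner

    print(final_holder("A"), final_holder("B"))  # A A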
See PHILOSOPHY OF ECONOMICS. A.R.
Cockburn, Catherine
Trotter 1679–1749, English philosopher and playwright who made a significant
contribution to the debates on ethical rationalism sparked by Clarke’s Boyle
lectures (1704–05). The major theme of her writings is the nature of moral
obligation. Cockburn displays a consistent, non-doctrinaire philosophical
position, arguing that moral duty is to be rationally deduced from the “nature
and fitness of things" (Remarks, 1747) and is not founded primarily in externally
imposed sanctions. Her writings, published anonymously, take the form of
philosophical debates with others, including Samuel Rutherforth, William
Warburton, Isaac Watts, Francis Hutcheson, and Lord Shaftesbury. Her best-known
intervention in contemporary philosophical debate was her able defense of
Locke’s Essay in 1702. S.H. coercion.FREE WILL PROBLEM. cogito argument.
DESCARTES. Cogito ergo
sum (Latin, 'I think, therefore I am'), the starting point of Descartes's system
of knowledge. In his Discourse on the Method (1637), he observes that the proposition 'I am thinking, therefore I exist' (je pense, donc je suis) is "so
firm and sure that the most extravagant suppositions of the skeptics were
incapable of shaking it.” The celebrated phrase, in its better-known Latin
version, also occurs in the Principles of Philosophy (1644), but is not to be found in the Meditations (1641), though the latter contains the fullest statement
of the reasoning behind Descartes’s certainty of his own existence.
cognitive dissonance,
mental discomfort arising from conflicting beliefs or attitudes held
simultaneously. Leon Festinger, who originated the theory of cognitive
dissonance in a book of that title (1957), suggested that cognitive dissonance
has motivational characteristics. Suppose a person is contemplating moving to a
new city. She is considering both Birmingham and
Boston. She cannot move to both, so she must choose. Dissonance is experienced
by the person if in choosing, say, Birmingham, she acquires knowledge of bad or
unwelcome features of Birmingham and of good or welcome aspects of Boston. The
amount of dissonance depends on the relative intensities of dissonant elements.
Hence, if the only dissonant factor is her learning that Boston is cooler than
Birmingham, and she does not regard climate as important, she will experience
little dissonance. Dissonance may occur in several sorts of psychological
states or processes, although the bulk of research in cognitive dissonance
theory has been on dissonance in choice and on the justification and
psychological aftereffects of choice. Cognitive dissonance may be involved in
two phenomena of interest to philosophers, namely, self-deception and weakness
of will. Why do self-deceivers try to get themselves to believe something that,
in some sense, they know to be false? One may resort to self-deception when
knowledge causes dissonance. Why do the weak-willed perform actions they know
to be wrong? One may become weak-willed when dissonance arises from the
expected consequences of doing the right thing.
G.A.G.
cognitive meaning. See MEANING.
cognitive psychology.
cognitive psychotherapy,
an expression introduced by Brandt in A Theory of the Good and the Right (1979)
to refer to a process of assessing and adjusting one’s desires, aversions, or
pleasures (henceforth, "attitudes"). This process is central to Brandt's analysis
of rationality, and ultimately, to his view on the justification of morality.
Cognitive psychotherapy consists of the agent’s criticizing his attitudes by
repeatedly representing to himself, in an ideally vivid way and at appropriate
times, all relevant available information. Brandt characterizes the key
definiens as follows: (1) available information is "propositions accepted by the science of the agent's day, plus factual propositions justified by publicly accessible evidence including testimony of others about themselves and the principles of logic"; (2) information is relevant provided, if the agent were to reflect repeatedly on it, "it would make a difference," i.e., would affect the attitude in question, and the effect would be a function of its content, not an accidental byproduct; (3) relevant information is represented in an ideally vivid way when the agent focuses on it with maximal clarity and detail and with no hesitation or doubt about its truth; and (4) 'repeatedly' and 'at appropriate times' refer, respectively, to the frequency and occasions that would result in the
information’s having the maximal attitudinal impact. Suppose Mary’s desire to
smoke were extinguished by her bringing to the focus of her attention, whenever
she was about to inhale smoke, some justified beliefs, say that smoking is
hazardous to one’s health and may cause lung cancer; Mary’s desire would have
been removed by cognitive psychotherapy. According to Brandt, an attitude is
rational for a person provided it is one that would survive, or be produced by,
cognitive psychotherapy; otherwise it is irrational. Rational attitudes, in
this sense, provide a basis for moral norms. Roughly, the correct moral norms
are those of a moral code that persons would opt for if (i) they were motivated by attitudes that survive the process of cognitive psychotherapy; and (ii) at the time of opting for a moral code, they were fully aware of, and vividly
attentive to, all available information relevant to choosing a moral code for a
society in which they are to live for the rest of their lives. In this way,
Brandt seeks a value-free justification for moral norms, one that avoids the problems of other
theories such as those that make an appeal to intuitions.
See ETHICS, INSTRUMENTALISM, INTUITION, RATIONALITY. Y.Y.
cognitive science, an
interdisciplinary research cluster that seeks to account for intelligent
activity, whether exhibited by living organisms (especially adult humans) or
machines. Hence, cognitive psychology and artificial intelligence constitute
its core. A number of other disciplines, including neuroscience, linguistics,
anthropology, and philosophy, as well as other fields of psychology (e.g., developmental psychology), are more peripheral contributors. The quintessential cognitive scientist is someone who employs computer modeling techniques (developing computer programs for the purpose of simulating particular human cognitive activities), but the broad range of disciplines that are at least
peripherally constitutive of cognitive science have lent a variety of research
strategies to the enterprise. While there are a few common institutions that
seek to unify cognitive science (e.g., departments, journals, and societies), the
problems investigated and the methods of investigation often are limited to a
single contributing discipline. Thus, it is more appropriate to view cognitive
science as a cross-disciplinary enterprise than as itself a new discipline.
While interest in cognitive phenomena has historically played a central role in
the various disciplines contributing to cognitive science, the term properly
applies to cross-disciplinary activities that emerged in the 1970s. During the
preceding two decades each of the disciplines that became part of cognitive
science gradually broke free of positivistic and behavioristic proscriptions
that barred systematic inquiry into the operation of the mind. One of the
primary factors that catalyzed new investigations of cognitive activities was
Chomsky’s generative grammar, which he advanced not only as an abstract theory
of the structure of language, but also as an account of language users’ mental
knowledge of language (their linguistic competence). A more fundamental factor
was the development of approaches for theorizing about information in an abstract
manner, and the introduction of machines (computers) that could manipulate
information. This gave rise to the idea that one might program a computer to
process information so as to exhibit behavior that would, if performed by a
human, require intelligence. If one tried to formulate a unifying question
guiding cognitive science research, it would probably be: How does the
cognitive system work? But even this common question is interpreted quite
differently in different disciplines. We can appreciate these differences by
looking just at language. While psycholinguists (generally psychologists) seek to
identify the processing activities in the mind that underlie language use, most
linguists focus on the products of this internal processing, seeking to articulate
the abstract structure of language. A frequent goal of computer scientists, in
contrast, has been to develop computer programs to parse natural language input
and produce appropriate syntactic and semantic representations. These
differences in objectives among the cognitive science disciplines correlate
with different methodologies. The following represent some of the major
methodological approaches of the contributing disciplines and some of the
problems each encounters. Artificial intelligence. If the human cognitive
system is viewed as computational, a natural goal is to simulate its
performance. This typically requires formats for representing information as
well as procedures for searching and manipulating it. Some of the earliest
AI programs drew heavily on the resources of first-order predicate calculus,
representing information in propositional formats and manipulating it according
to logical principles. For many modeling endeavors, however, it proved
important to represent information in larger-scale structures, such as frames (Marvin Minsky), schemata (David Rumelhart), or scripts (Roger Schank), in which different pieces of information associated with an object or activity would be stored together. Such structures generally employed default values for specific slots (specifying, e.g., that deer live in forests) that would be part of the representation unless overridden by new information (e.g., that a particular deer lives in the San Diego Zoo). A very influential alternative approach,
developed by Allen Newell, replaces declarative representations of information
with procedural representations, known as productions. These productions take
the form of conditionals that specify actions to be performed (e.g., copying an expression into working memory) if certain conditions are satisfied (e.g., the expression matches another expression).
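The default-slot idea lends itself to a few lines of code. A minimal sketch in Python (ours, illustrating only the frames discussion above, not any particular AI system):

    # Frame with default slot values; specific information overrides defaults.
    deer_frame = {"habitat": "forest", "legs": 4}            # defaults for deer
    zoo_deer = {**deer_frame, "habitat": "San Diego Zoo"}    # override on new info
    print(zoo_deer["habitat"], zoo_deer["legs"])             # San Diego Zoo 4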
Psychology. While some psychologists develop computer simulations, a more characteristic activity is to acquire
detailed data from human subjects that can reveal the cognitive system’s actual
operation. This is a challenging endeavor. While cognitive activities transpire
within us, they frequently do so in such a smooth and rapid fashion that we are
unaware of them. For example, we have little awareness of what occurs when we
recognize an object as a chair or remember the name of a client. Some cognitive
functions, though, seem to be transparent to consciousness. For example, we
might approach a logic problem systematically, enumerating possible solutions
and evaluating them serially. Allen Newell and Herbert Simon have refined
methods for exploiting verbal protocols obtained from subjects as they solve
such problems. These methods have been quite fruitful, but their limitations
must be respected. In many cases in which we think we know how we performed a
cognitive task, Richard Nisbett and Timothy Wilson have argued that we are
misled, relying on folk theories to describe how our minds work rather than
reporting directly on their operation. In most cases cognitive psychologists
cannot rely on conscious awareness of cognitive processes, but must proceed as
do physiologists trying to understand metabolism: they must devise experiments
that reveal the underlying processes operative in cognition. One approach is to
seek clues in the errors to which the cognitive system is prone. Such errors might be more easily accounted for by
one kind of underlying process than by another. Speech errors, such as
substituting ‘bat cad’ for ‘bad cat’, may be diagnostic of the mechanisms used
to construct speech. This approach is often combined with strategies that seek
to overload or disrupt the system’s normal operation. A common technique is to
have a subject perform two tasks at once (e.g., read a passage while watching for a colored spot). Cognitive
psychologists may also rely on the ability to dissociate two phenomena e.g.,
obliterate one while maintaining the other to establish their independence.
Other types of data widely used to make inferences about the cognitive system
include patterns of reaction times, error rates, and priming effects in which
activation of one item facilitates access to related items. Finally,
developmental psychologists have brought a variety of kinds of data to bear on
cognitive science issues. For example, patterns of acquisition times have been
used in a manner similar to reaction time patterns, and accounts of the origin
and development of systems constrain and elucidate mature systems. Linguistics.
Since linguists focus on a product of cognition rather than the processes that
produce the product, they tend to test their analyses directly against our
shared knowledge of that product. Generative linguists in the tradition of
Chomsky, for instance, develop grammars that they test by probing whether they
generate the sentences of the language and no others. While grammars are
certainly germane to developing processing models, they do not directly
determine the structure of processing models. Hence, the central task of
linguistics is not central to cognitive science. However, Chomsky has augmented
his work on grammatical description with a number of controversial claims that
are psycholinguistic in nature e.g., his nativism and his notion of linguistic
competence. Further, an alternative approach to incorporating psycholinguistic
concerns, the cognitive linguistics of Lakoff and Langacker, has achieved
prominence as a contributor to cognitive science. Neuroscience. Cognitive
scientists have generally assumed that the processes they study are carried out,
in humans, by the brain. Until recently, however, neuroscience has been
relatively peripheral to cognitive science. In part this is because
neuroscientists have been chiefly concerned with the implementation of
processes, rather than the processes themselves, and in part because the
techniques available to neuroscientists such as single-cell recording have been
most suitable for studying the neural implementation of lower-order processes
such as sensation. A prominent exception was the classical studies of brain
lesions initiated by Broca and Wernicke, which seemed to show that the location
of lesions correlated with deficits in production versus comprehension of
speech. More recent data suggest that lesions in Broca’s area impair certain
kinds of syntactic processing. However, other developments in neuroscience
promise to make its data more relevant to cognitive modeling in the future.
These include studies of simple nervous systems, such as that of the aplysia a
genus of marine mollusk by Eric Kandel, and the development of a variety of
techniques for determining the brain activities involved in the performance of
cognitive tasks e.g., recording of evoked response potentials over larger brain
structures, and imaging techniques such as positron emission tomography. While
in the future neuroscience is likely to offer much richer information that will
guide the development and constrain the character of cognitive models,
neuroscience will probably not become central to cognitive science. It is
itself a rich, multidisciplinary research cluster whose contributing
disciplines employ a host of complicated research tools. Moreover, the focus of
cognitive science can be expected to remain on cognition, not on its
implementation. So far cognitive science has been characterized in terms of its
modes of inquiry. One can also focus on the domains of cognitive phenomena that
have been explored. Language represents one such domain. Syntax was one of the
first domains to attract wide attention in cognitive science. For example,
shortly after Chomsky introduced his transformational grammar, psychologists
such as George Miller sought evidence that transformations figured directly in
human language processing. From this beginning, a more complex but enduring
relationship among linguists, psychologists, and computer scientists has formed
a leading edge for much cognitive science research. Psycholinguistics has
matured; sophisticated computer models of natural language processing have been
developed; and cognitive linguists have offered a particular synthesis that
emphasizes semantics, pragmatics, and cognitive foundations of language.
Thinking and reasoning. These constitute an important domain of cognitive
science that is closely linked to philosophical interests. Problem solving, such as that which figures in solving
puzzles, playing games, or serving as an expert in a domain, has provided a
prototype for thinking. Newell and Simon’s influential work construed problem
solving as a search through a problem space and introduced the idea of
heuristics generally reliable but fallible simplifying devices to facilitate the search.
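The idea of searching a problem space with a fallible heuristic can be sketched as follows; the toy problem, the neighbor function, and the particular heuristic are invented for illustration and are not Newell and Simon's own examples.

```python
# A minimal sketch of heuristic (best-first) search through a problem space.
# The heuristic is generally reliable but fallible: the path found may not be
# optimal, and dead ends may be explored along the way.
import heapq

def heuristic_search(start, goal, neighbors, h):
    frontier = [(h(start), start, [start])]   # states ranked by heuristic score
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

# Toy problem space: reach 10 from 1, where each state can be doubled or
# incremented; the heuristic is plain numeric distance to the goal.
path = heuristic_search(1, 10,
                        neighbors=lambda n: [n * 2, n + 1],
                        h=lambda n: abs(10 - n))
print(path)  # one possible solution path, e.g. [1, 2, 4, 8, 9, 10]
```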
One arena for problem solving, scientific reasoning and discovery, has particularly interested
philosophers. Artificial intelligence researchers such as Simon and Patrick
Langley, as well as philosophers such as Paul Thagard and Lindley Darden, have
developed computer programs that can utilize the same data as that available to
historical scientists to develop and evaluate theories and plan future
experiments. Cognitive scientists have also sought to study the cognitive
processes underlying the sorts of logical reasoning both deductive and
inductive whose normative dimensions have been a concern of philosophers.
Philip Johnson-Laird, for example, has sought to account for human performance in dealing with syllogistic reasoning by describing a process of
constructing and manipulating mental models. Finally, the process of
constructing and using analogies is another aspect of reasoning that has been
extensively studied by traditional philosophers as well as cognitive
scientists. Memory, attention, and learning. Cognitive scientists have
differentiated a variety of types of memory. The distinction between long- and
short-term memory was very influential in the information-processing models of
the 1970s. Short-term memory was characterized by limited capacity, such as
that exhibited by the ability to retain a seven-digit telephone number for a
short period. In much cognitive science work, the notion of working memory has
superseded short-term memory, but many theorists are reluctant to construe this
as a separate memory system as opposed to a part of long-term memory that is
activated at a given time. Endel Tulving introduced a distinction between
semantic memory general knowledge that is not specific to a time or place and
episodic memory memory for particular episodes or occurrences. More recently,
Daniel Schacter proposed a related distinction that emphasizes consciousness:
implicit memory access without awareness versus explicit memory which does
involve awareness and is similar to episodic memory. One of the interesting
results of cognitive research is the dissociation between different kinds of
memory: a person might have severely impaired memory of recent events while
having largely unimpaired implicit memory. More generally, memory research has
shown that human memory does not simply store away information as in a file
cabinet. Rather, information is organized according to preexisting structures
such as scripts, and can be influenced by events subsequent to the initial
storage. Exactly what gets stored and retrieved is partly determined by
attention, and psychologists in the information-processing tradition have
sought to construct general cognitive models that emphasize memory and
attention. Finally, the topic of learning has once again become prominent.
Extensively studied by the behaviorists of the precognitive era, learning was
superseded by memory and attention as a research focus in the 1970s. In the
1980s, artificial intelligence researchers developed a growing interest in
designing systems that can learn; machine learning is now a major problem area
in AI. During the same period, connectionism arose to offer an alternative kind
of learning model. Perception and motor control. Perceptual and motor systems
provide the inputs and outputs to cognitive systems. An important aspect of
perception is the recognition of something as a particular kind of object or
event; this requires accessing knowledge of objects and events. One of the
central issues concerning perception questions the extent to which perceptual
processes are influenced by higher-level cognitive information top-down
processing versus how much they are driven purely by incoming sensory
information bottom-up processing. A related issue concerns the claim that
visual imagery is a distinct cognitive process and is closely related to visual
perception, perhaps relying on the same brain processes. A number of cognitive
science inquiries e.g., by Roger Shepard and Stephen Kosslyn have focused on
how people use images in problem solving and have sought evidence that people
solve problems by rotating images or scanning them. This research has been
extremely controversial, as other investigators have argued against the use of
images and have tried to account for the performance data that have been
generated in terms of the use of propositionally represented information.
Finally, a distinction recently has been proposed between the What and Where
systems. All of the foregoing issues concern the What system which recognizes
and represents objects as exemplars of categories. The Where system, in
contrast, concerns objects in their environment, and is particularly adapted to
the dynamics of movement. Gibson’s ecological psychology is a long-standing
inquiry into this aspect of perception, and work on the neural substrates is
now attracting the interest of cognitive scientists as well. Recent
developments. The breadth of cognitive science has been expanding in recent
years. In the 1970s, cognitive science inquiries tended to focus on processing
activities of adult humans or on computer models of intelligent performance;
the best work often combined these approaches. Subsequently, investigators
examined in much greater detail how cognitive systems develop, and
developmental psychologists have increasingly contributed to cognitive science.
One of the surprising findings has been that, contrary to the claims of William
James, infants do not seem to confront the world as a “blooming, buzzing
confusion,” but rather recognize objects and events quite early in life.
Cognitive science has also expanded along a different dimension. Until recently
many cognitive studies focused on what humans could accomplish in laboratory
settings in which they performed tasks isolated from real-life contexts. The
motivation for this was the assumption that cognitive processes were generic
and not limited to specific contexts. However, a variety of influences, including
Gibsonian ecological psychology especially as interpreted and developed by
Ulric Neisser and Soviet activity theory, have advanced the view that cognition
is much more dynamic and situated in real-world tasks and environmental
contexts; hence, it is necessary to study cognitive activities in an
ecologically valid manner. Another form of expansion has resulted from a
challenge to what has been the dominant architecture for modeling cognition. An
architecture defines the basic processing capacities of the cognitive system.
The dominant cognitive architecture has assumed that the mind possesses a
capacity for storing and manipulating symbols. These symbols can be composed
into larger structures according to syntactic rules that can then be operated
upon by formal rules that recognize that structure. Jerry Fodor has referred to
this view of the cognitive system as the “language of thought hypothesis” and
clearly construes it as a modern heir of rationalism. One of the basic
arguments for it, due to Fodor and Zenon Pylyshyn, is that thoughts, like
language, exhibit productivity the unlimited capacity to generate new thoughts
and systematicity exhibited by the inherent relation between thoughts such as
‘Joan loves the florist’ and ‘The florist loves Joan’. They argue that only if
the architecture of cognition has language-like compositional structure would
productivity and systematicity be generic properties and hence not require
special case-by-case accounts. The challenge to this architecture has arisen
with the development of an alternative architecture, known as connectionism,
parallel distributed processing, or neural network modeling, which proposes
that the cognitive system consists of vast numbers of neuronlike units that
excite or inhibit each other. Knowledge is stored in these systems by the
adjustment of connection strengths between processing units; consequently,
connectionism is a modern descendant of associationism. Connectionist networks
provide a natural account of certain cognitive phenomena that have proven
challenging for the symbolic architecture, including pattern recognition,
reasoning with soft constraints, and learning. Whether they also can account
for productivity and systematicity has been the subject of debate.
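As a rough illustration, assuming a single neuronlike unit and a simple perceptron-style learning rule rather than any particular published network, the core connectionist idea, weighted excitation and inhibition with knowledge stored in adjusted connection strengths, can be sketched as follows.

```python
# A minimal sketch of a connectionist unit: positive weights excite, negative
# weights inhibit, and learning adjusts the connection strengths.

def activation(inputs, weights):
    # Threshold the summed, weighted input into an all-or-none output.
    net = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 if net > 0 else 0.0

def learn_step(inputs, weights, target, rate=0.1):
    # Error-correcting (perceptron-style) rule: nudge each weight so the
    # unit's output moves toward the target for this input pattern.
    error = target - activation(inputs, weights)
    return [w + rate * error * i for w, i in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(20):                       # repeated exposure to one pattern
    weights = learn_step([1.0, 1.0], weights, target=1.0)
print(activation([1.0, 1.0], weights))    # 1.0: the pattern is now stored
```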
Philosophical theorizing about the mind has often provided a starting point for
the modeling and empirical investigations of modern cognitive science. The
ascent of cognitive science has not meant that philosophers have ceased to play
a role in examining cognition. Indeed, a number of philosophers have pursued
their inquiries as contributors to cognitive science, focusing on such issues
as the possible reduction of cognitive theories to those of neuroscience, the
status of folk psychology relative to emerging scientific theories of mind, the
merits of rationalism versus empiricism, and strategies for accounting for the
intentionality of mental states. The interaction between philosophers and other
cognitive scientists, however, is bidirectional, and a number of developments
in cognitive science promise to challenge or modify traditional philosophical
views of cognition. For example, studies by cognitive and social psychologists
have challenged the assumption that human thinking tends to accord with the
norms of logic and decision theory. On a variety of tasks humans seem to follow
procedures heuristics that violate normative canons, raising questions about
how philosophers should characterize rationality. Another area of empirical
study that has challenged philosophical assumptions has been the study of
concepts and categorization. Philosophers since Plato have widely assumed that
concepts of ordinary language, such as red, bird, and justice, should be
definable by necessary and sufficient conditions. But celebrated studies by
Eleanor Rosch and her colleagues indicated that many
ordinary-language concepts had a prototype structure instead. On this view, the
categories employed in human thinking are characterized by prototypes the
clearest exemplars and a metric that grades exemplars according to their degree
of typicality.
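Under simplifying assumptions, with exemplars encoded as numeric feature vectors, the prototype taken as their mean, and typicality graded by closeness to the prototype, the idea can be sketched as follows; the bird features below are invented for illustration.

```python
# A minimal sketch of prototype structure: a category is summarized by the
# mean of its exemplars' features, and membership is graded by typicality.

def prototype(exemplars):
    # Mean feature vector of the category's known exemplars.
    n = len(exemplars)
    return [sum(e[i] for e in exemplars) / n for i in range(len(exemplars[0]))]

def typicality(item, proto):
    # Graded typicality: closeness (inverse distance) to the prototype.
    dist = sum((a - b) ** 2 for a, b in zip(item, proto)) ** 0.5
    return 1.0 / (1.0 + dist)

# Hypothetical bird exemplars as (size, flies, sings) feature vectors:
birds = [[0.3, 1.0, 1.0], [0.4, 1.0, 0.8], [0.2, 1.0, 1.0]]
robin, penguin = [0.3, 1.0, 1.0], [0.8, 0.0, 0.0]
p = prototype(birds)
assert typicality(robin, p) > typicality(penguin, p)  # robin is more typical
```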
Recent investigations have also pointed to significant instability in conceptual structure and to the role of theoretical beliefs in
organizing categories. This alternative conception of concepts has profound
implications for philosophical methodologies that portray philosophy’s task to
be the analysis of concepts.
Cohen, Hermann 1842–1918, Jewish philosopher who originated and led, with Paul Natorp 1854–1924, the Marburg School
of neo-Kantianism. He taught at Marburg from 1876 to 1912. Cohen wrote
commentaries on Kant’s Critiques prior to publishing System der Philosophie
1902–12, which consisted of parts on logic, ethics, and aesthetics. He developed
a Kantian idealism of the natural sciences, arguing that a transcendental
analysis of these sciences shows that “pure thought” his system of Kantian a
priori principles “constructs” their “reality.” He also developed Kant’s ethics
as a democratic socialist ethics. He ended his career at a rabbinical seminary
in Berlin, writing his influential Religion der Vernunft aus den Quellen des
Judentums “Religion of Reason out of the Sources of Judaism,” 1919, which
explicated Judaism on the basis of his own Kantian ethical idealism. Cohen’s
ethical-political views were adopted by Kurt Eisner 1867–1919, leader of the
Munich revolution of 1918, and also had an impact on the revisionism of
orthodox Marxism of the German Social Democratic Party, while his philosophical
writings greatly influenced Cassirer.
CASSIRER, KANT,
NEO-KANTIANISM. H.v.d.L.
coherence theory of justification. COHERENTISM.
coherence theory of knowledge. COHERENTISM.
coherence theory of
truth, the view that either the nature of truth or the sole criterion for
determining truth is constituted by a relation of coherence between the belief
or judgment being assessed and other beliefs or judgments. As a view of the
nature of truth, the coherence theory represents an alternative to the
correspondence theory of truth. Whereas the correspondence theory holds that a
belief is true provided it corresponds to independent reality, the coherence
theory holds that it is true provided it stands in a suitably strong relation
of coherence to other beliefs, so that the believer’s total system of beliefs
forms a highly or perhaps perfectly coherent system. Since, on such a
characterization, truth depends entirely on the internal relations within the
system of beliefs, such a conception of truth seems to lead at once to idealism
as regards the nature of reality, and its main advocates have been proponents
of absolute idealism mainly Bradley, Bosanquet, and Brand Blanshard. A less
explicitly metaphysical version of the coherence theory was also held by
certain members of the school of logical positivism mainly Otto Neurath and
Carl Hempel. The nature of the intended relation of coherence, often
characterized metaphorically in terms of the beliefs in question fitting
together or dovetailing with each other, has been and continues to be a matter
of uncertainty and controversy. Despite occasional misconceptions to the
contrary, it is clear that coherence is intended to be a substantially more
demanding relation than mere consistency, involving such things as inferential
and explanatory relations within the system of beliefs. Perfect or ideal
coherence is sometimes described as requiring that every belief in the system
of beliefs entails all the others though it must be remembered that those
offering such a characterization do not restrict entailments to those that are
formal or analytic in character. Since actual human systems of belief seem
inevitably to fall short of perfect coherence, however that is understood,
their truth is usually held to be only approximate at best, thus leading to the
absolute idealist view that truth admits of degrees. As a view of the criterion
of truth, the coherence theory of truth holds that the sole criterion or
standard for determining whether a belief is true is its coherence with other
beliefs or judgments, with the degree of justification varying with the degree
of coherence. Such a view amounts to a coherence theory of epistemic
justification. It was held by most of the proponents of the coherence theory of
the nature of truth, though usually without distinguishing the two views very
clearly. For philosophers who hold both of these views, the
thesis that coherence is the sole criterion of truth is usually logically
prior, and the coherence theory of the nature of truth is adopted as a
consequence, the clearest argument being that only the view that perfect or
ideal coherence is the nature of truth can make sense of the appeal to degrees
of coherence as a criterion of truth.
COHERENTISM, IDEALISM,
TRUTH. L.B.
coherentism, in
epistemology, a theory of the structure of knowledge or justified beliefs
according to which all beliefs representing knowledge are known or justified in
virtue of their relations to other beliefs, specifically, in virtue of
belonging to a coherent system of beliefs. Assuming that the orthodox account
of knowledge is correct at least in maintaining that justified true belief is
necessary for knowledge, we can identify two kinds of coherence theories of
knowledge: those that are coherentist merely in virtue of incorporating a
coherence theory of justification, and those that are doubly coherentist
because they account for both justification and truth in terms of coherence.
What follows will focus on coherence theories of justification. Historically,
coherentism is the most significant alternative to foundationalism. The latter
holds that some beliefs, basic or foundational beliefs, are justified apart
from their relations to other beliefs, while all other beliefs derive their
justification from that of foundational beliefs. Foundationalism portrays
justification as having a structure like that of a building, with certain
beliefs serving as the foundations and all other beliefs supported by them.
Coherentism rejects this image and pictures justification as having the
structure of a raft. Justified beliefs, like the planks that make up a raft,
mutually support one another. This picture of the coherence theory is due to
the positivist Otto Neurath. Among the positivists, Hempel shared Neurath’s
sympathy for coherentism. Other defenders of coherentism from the late
nineteenth and early twentieth centuries were idealists, e.g., Bradley,
Bosanquet, and Brand Blanshard. Idealists often held the sort of double
coherence theory mentioned above. The contrast between foundationalism and
coherentism is commonly developed in terms of the regress argument. If we are
asked what justifies one of our beliefs, we characteristically answer by citing
some other belief that supports it, e.g., logically or probabilistically. If we
are asked about this second belief, we are likely to cite a third belief, and
so on. There are three shapes such an evidential chain might have: it could go
on forever, it could eventually end in some belief, or it could loop back upon
itself, i.e., eventually contain again a belief that had occurred “higher up”
on the chain. Assuming that infinite chains are not really possible, we are
left with a choice between chains that end and circular chains. According to
foundationalists, evidential chains must eventually end with a foundational
belief that is justified, if the belief at the beginning of the chain is to be
justified. Coherentists are then portrayed as holding that circular chains can
yield justified beliefs. This portrayal is, in a way, correct. But it is also
misleading since it suggests that the disagreement between coherentism and
foundationalism is best understood as concerning only the structure of
evidential chains. Talk of evidential chains in which beliefs that are further
down on the chain are responsible for beliefs that are higher up naturally
suggests the idea that just as real chains transfer forces, evidential chains
transfer justification. Foundationalism then sounds like a real possibility.
Foundational beliefs already have justification, and evidential chains serve to
pass the justification along to other beliefs. But coherentism seems to be a
nonstarter, for if no belief in the chain is justified to begin with, there is
nothing to pass along. Altering the metaphor, we might say that coherentism
seems about as likely to succeed as a bucket brigade that does not end at a
well, but simply moves around in a circle. The coherentist seeks to dispel this
appearance by pointing out that the primary function of evidential chains is
not to transfer epistemic status, such as justification, from belief to belief.
Indeed, beliefs are not the primary locus of justification. Rather, it is whole
systems of belief that are justified or not in the primary sense; individual
beliefs are justified in virtue of their membership in an appropriately
structured system of beliefs. Accordingly, what the coherentist claims is that
the appropriate sorts of evidential chains, which will be circular and indeed will likely contain numerous circles, constitute justified systems of
belief. The individual beliefs within such a system are themselves justified in
virtue of their place in the entire system and not because this status is
passed on to them from beliefs further down some evidential chain in which they
figure. One can, therefore, view coherentism with considerable accuracy as a
version of foundationalism that holds all beliefs to be foundational. From this
perspective, the difference between coherentism and traditional foundationalism
has to do with what accounts for the epistemic status of foundational beliefs, with
traditional foundationalism holding that such beliefs can be justified in
various ways, e.g., by perception or reason, while coherentism insists that the
only way such beliefs can be justified is by being a member of an appropriately
structured system of beliefs. One outstanding problem the coherentist faces is
to specify exactly what constitutes a coherent system of beliefs. Coherence
clearly must involve much more than mere absence of mutually contradictory
beliefs. One way in which beliefs can be logically consistent is by concerning
completely unrelated matters, but such a consistent system of beliefs would not
embody the sort of mutual support that constitutes the core idea of
coherentism. Moreover, one might question whether logical consistency is even
necessary for coherence, e.g., on the basis of the preface paradox. Similar
points can be made regarding efforts to begin an account of coherence with the
idea that beliefs and degrees of belief must correspond to the probability
calculus. So although it is difficult to avoid thinking that such formal
features as logical and probabilistic consistency are significantly involved in
coherence, it is not clear exactly how they are involved. An account of
coherence can be drawn more directly from the following intuitive idea: a
coherent system of belief is one in which each belief is epistemically
supported by the others, where various types of epistemic support are recognized,
e.g., deductive or inductive arguments, or inferences to the best explanation.
There are, however, at least two problems this suggestion does not address.
First, since very small sets of beliefs can be mutually supporting, the
coherentist needs to say something about the scope a system of beliefs must
have to exhibit the sort of coherence required for justification. Second, given
the possibility of small sets of mutually supportive beliefs, it is apparently
possible to build a system of very broad scope out of such small sets of
mutually supportive beliefs by mere conjunction, i.e., without forging any
significant support relations among them. Yet, since the interrelatedness of
all truths does not seem discoverable by analyzing the concept of justification,
the coherentist cannot rule out epistemically isolated subsystems of belief
entirely. So the coherentist must say what sorts of isolated subsystems of
belief are compatible with coherence. The difficulties involved in specifying a
more precise concept of coherence should not be pressed too vigorously against
the coherentist. For one thing, most foundationalists have been forced to grant
coherence a significant role within their accounts of justification, so no
dialectical advantage can be gained by pressing them. Moreover, only a little
reflection is needed to see that nearly all the difficulties involved in
specifying coherence are manifestations within a specific context of quite
general philosophical problems concerning such matters as induction, explanation,
theory choice, the nature of epistemic support, etc. They are, then, problems
that are faced by logicians, philosophers of science, and epistemologists quite
generally, regardless of whether they are sympathetic to coherentism.
Coherentism faces a number of serious objections. Since according to
coherentism justification is determined solely by the relations among beliefs,
it does not seem to be capable of taking us outside the circle of our beliefs.
This fact gives rise to complaints that coherentism cannot allow for any input
from external reality, e.g., via perception, and that it can neither guarantee
nor even claim that it is likely that coherent systems of belief will make
contact with such reality or contain true beliefs. And while it is widely
granted that justified false beliefs are possible, it is just as widely
accepted that there is an important connection between justification and truth,
a connection that rules out accounts according to which justification is not
truth-conducive. These abstractly formulated complaints can be made more vivid,
in the case of the former, by imagining a person with a coherent system of
beliefs that becomes frozen, and fails to change in the face of ongoing sensory
experience; and in the case of the latter, by pointing out that, barring an
unexpected account of coherence, it seems that a wide variety of coherent
systems of belief are possible, systems that are largely disjoint or even
incompatible.
COHERENCE THEORY OF
TRUTH, EPISTEMOLOGY, FOUNDATIONALISM, JUSTIFICATION. M.R.D.
Coimbra commentaries. FONSECA.
collective unconscious. JUNG.
collectivity. DISTRIBUTION.
Collier,
Arthur 1680–1732, English philosopher, a Wiltshire parish priest whose Clavis
Universalis 1713 defends a version of immaterialism closely akin to Berkeley’s.
Matter, Collier contends, “exists in, or in dependence on mind.” He
emphatically affirms the existence of bodies, and, like Berkeley, defends
immaterialism as the only alternative to
skepticism. Collier grants that bodies seem to be external, but their
“quasi-externeity” is only the effect of God’s will. In Part I of the Clavis
Collier argues as Berkeley had in his New Theory of Vision, 1709 that the
visible world is not external. In Part II he argues as Berkeley had in the
Principles, 1710, and Three Dialogues, 1713 that the external world “is a being
utterly impossible.” Two of Collier’s arguments for the “intrinsic repugnancy”
of the external world resemble Kant’s first and second antinomies. Collier
argues, e.g., that the material world is both finite and infinite; the
contradiction can be avoided, he suggests, only by denying its external
existence. Some scholars suspect that Collier deliberately concealed his debt
to Berkeley; most accept his report that he arrived at his views ten years
before he published them. Collier first refers to Berkeley in letters written
in 1714–15. In A Specimen of True Philosophy 1730, where he offers an
immaterialist interpretation of the opening verse of Genesis, Collier writes
that “except a single passage or two” in Berkeley’s Dialogues, there is no
other book “which I ever heard of” on the same subject as the Clavis. This is a
puzzling remark on several counts, one being that in the Preface to the
Dialogues, Berkeley describes his earlier books. Collier’s biographer reports
seeing among his papers now lost an outline, dated 1708, on “the question of
the visible world being without us or not,” but he says no more about it. The
biographer concludes that Collier’s independence cannot reasonably be doubted;
perhaps the outline would, if unearthed, establish this.
BERKELEY. K.P.W.
colligation. WHEWELL.
Collingwood, Robin George
1889–1943, English philosopher and historian. His father, W. G. Collingwood,
John Ruskin’s friend, secretary, and biographer, at first educated him at home
in Coniston and later sent him to Rugby School and then Oxford. Immediately
upon graduating in 1912, he was elected to a fellowship at Pembroke College; except
for service with admiralty intelligence during World War I, he remained at
Oxford until 1941, when illness compelled him to retire. Although his
Autobiography expresses strong disapproval of the lines on which, during his
lifetime, philosophy at Oxford developed, he was an “insider.” In 1934 he was elected to the
Waynflete Professorship, the first to become vacant after he had done enough
work to be a serious candidate. He was also a leading archaeologist of Roman
Britain. Although as a student Collingwood was deeply influenced by the
“realist” teaching of John Cook Wilson, he studied not only the British
idealists, but also Hegel and the contemporary Italian post-Hegelians. At
twenty-three, he published a translation of Croce’s book on Vico’s philosophy.
Religion and Philosophy 1916, the first of his attempts to present orthodox
Christianity as philosophically acceptable, has both idealist and Cook
Wilsonian elements. Thereafter the Cook Wilsonian element steadily diminished.
In Speculum Mentis 1924, he investigated the nature and ultimate unity of the
four special ‘forms of experience’ art,
religion, natural science, and history
and their relation to a fifth comprehensive form philosophy. While all four, he contended, are
necessary to a full human life now, each is a form of error that is corrected
by its less erroneous successor. Philosophy is error-free but has no content of
its own: “The truth is not some perfect system of philosophy: it is simply the
way in which all systems, however perfect, collapse into nothingness on the
discovery that they are only systems.” Some critics dismissed this enterprise
as idealist a description Collingwood accepted when he wrote, but even those
who favored it were disturbed by the apparent skepticism of its result. A year
later, he amplified his views about art in Outlines of a Philosophy of Art.
Since much of what Collingwood went on to write about philosophy has never been
published, and some of it has been negligently destroyed, his thought after
Speculum Mentis is hard to trace. It will not be definitively established until
the more than 3,000 pages of his surviving unpublished manuscripts deposited in the
Bodleian Library in 1978 have been thoroughly studied. They were not available
to the scholars who published studies of his philosophy as a whole up to 1990. Three
trends in how his philosophy developed, however, are discernible. The first is
that as he continued to investigate the four special forms of experience, he
came to consider each valid in its own right, and not a form of error. As early
as 1928, he abandoned the conception of the historical past in Speculum Mentis
as simply a spectacle, alien to the historian’s mind; he now proposed a theory
of it as thoughts explaining past actions that, although occurring in the past,
can be rethought in the present. Not only can the identical thought “enacted”
at a definite time in the past be “reenacted” any number of times after, but it
can be known to be so reenacted if physical evidence survives that
can be shown to be incompatible with other proposed reenactments. In 1933–34 he
wrote a series of lectures posthumously published as The Idea of Nature in
which he renounced his skepticism about whether the quantitative material world
can be known, and inquired why the three constructive periods he recognized in
European scientific thought, the Grecian, the Renaissance, and the modern,
could each advance our knowledge of it as they did. Finally, in 1937, returning
to the philosophy of art and taking full account of Croce’s later work, he
showed that imagination expresses emotion and becomes false when it
counterfeits emotion that is not felt; thus he transformed his earlier theory
of art as purely imaginative. His later theories of art and of history remain
alive; and his theory of nature, although corrected by research since his
death, was an advance when published. The second trend was that his conception
of philosophy changed as his treatment of the special forms of experience
became less skeptical. In his beautifully written Essay on Philosophical Method
1933, he argued that philosophy has an object
the ens realissimum as the one, the true, and the good of which the objects of the special forms of
experience are appearances; but that implies what he had ceased to believe,
that the special forms of experience are forms of error. In his Principles of
Art 1938 and New Leviathan 1942 he denounced the idealist principle of Speculum
Mentis that to abstract is to falsify. Then, in his Essay on Metaphysics 1940,
he denied that metaphysics is the science of being qua being, and identified it
with the investigation of the “absolute presuppositions” of the special forms
of experience at definite historical periods. A third trend, which came to
dominate his thought as World War II approached, was to see serious philosophy
as practical, and so as having political implications. He had been, like
Ruskin, a radical Tory, opposed less to liberal or even some socialist measures
than to the bourgeois ethos from which they sprang. Recognizing European
fascism as the barbarism it was, and detesting anti-Semitism, he advocated an
antifascist foreign policy and intervention in the Spanish civil war in support
of the republic. His last major publication, The New Leviathan, impressively
defends what he called civilization against what he called barbarism; and
although it was neglected by political theorists after the war was won, the
collapse of Communism and the rise of Islamic states are winning it new
readers.
CROCE, HEGEL,
QUALITIES.
combinatory logic, a
branch of formal logic that deals with formal systems designed for the study of
certain basic operations for constructing and manipulating functions as rules,
i.e. as rules of calculation expressed by definitions. The notion of a function
was fundamental in the development of modern formal or mathematical logic that
was initiated by Frege, Peano, Russell, Hilbert, and others. Frege was the
first to introduce a generalization of the mathematical notion of a function to
include propositional functions, and he used the general notion for formally
representing logical notions such as those of a concept, object, relation,
generality, and judgment. Frege’s proposal to replace the traditional logical
notions of subject and predicate by argument and function, and thus to conceive
predication as functional application, marks a turning point in the history of
formal logic. In most modern logical systems, the notation used to express
functions, including propositional functions, is essentially that used in
ordinary mathematics. As in ordinary mathematics, certain basic notions are
taken for granted, such as the use of variables to indicate processes of
substitution. Like the original systems for modern formal logic, the systems of
combinatory logic were designed to give a foundation for mathematics. But
combinatory logic arose as an effort to carry the foundational aims further and
deeper. It undertook an analysis of notions taken for granted in the original
systems, in particular of the notions of substitution and of the use of
variables. In this respect combinatory logic was conceived by one of its
founders, H. B. Curry, to be concerned with the ultimate foundations and with
notions that constitute a “prelogic.” It was hoped that an analysis of this
prelogic would disclose the true source of the difficulties connected with the
logical paradoxes. The operation of applying a function to one of its
arguments, called application, is a primitive operation in all systems of
combinatory logic. If f is a function and x a possible argument, then the result
of the application operation is denoted (fx). In mathematics this is usually written f(x), but the notation (fx) is more convenient in combinatory logic. The
German logician M. Schönfinkel, who started combinatory logic in 1924, observed
that it is not necessary to introduce functions
of more than one variable, provided that the idea of a function is enlarged so
that functions can be arguments as well as values of other functions. A
function F(x,y) is represented with the function f, which when applied to the argument x has, as a value, the function fx, which, when applied to y, yields F(x,y), i.e. (fx)y = F(x,y). It is therefore convenient to omit parentheses with association to the left, so that fx1 . . . xn is used for (. . .((fx1)x2) . . . xn). Schönfinkel’s main result
was to show how to make the class of functions studied closed under explicit
definition by introducing two specific primitive functions, the combinators S and K, with the rules Kxy = x and Sxyz = xz(yz). To illustrate the effect of S in ordinary mathematical notation, let f and g be functions of two and one arguments, respectively; then Sfg is the function such that Sfgx = f(x, g(x)).
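Because application in combinatory logic takes one argument at a time, the rules for K and S can be checked directly with curried one-argument functions; the sketch below uses Python lambdas, and the sample functions f and g are invented for illustration.

```python
# A minimal sketch of the combinators K and S as curried functions,
# with ordinary application f(x) playing the role of combinatory (fx).

K = lambda x: lambda y: x                       # Kxy = x
S = lambda x: lambda y: lambda z: x(z)(y(z))    # Sxyz = xz(yz)

assert K(1)(2) == 1                             # Kxy = x on sample arguments

# Sfg applied to x yields f(x)(g(x)), i.e. f(x, g(x)) in ordinary notation:
f = lambda x: lambda y: x + y                   # a curried two-argument function
g = lambda x: x * 10                            # a one-argument function
assert S(f)(g)(3) == f(3)(g(3)) == 33
```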
Generally, if a(x1, . . . , xn) is an expression built up from constants and the variables shown by means of the application operation, then there is a function F constructed out of constants including the combinators S and K, such that Fx1 . . . xn = a(x1, . . . , xn). This is essentially the meaning of the combinatory
completeness of the theory of combinators in the terminology of H. B. Curry and
R. Feys, Combinatory Logic 1958; and H. B. Curry, J. R. Hindley, and J. P.
Seldin, Combinatory Logic, vol. II 1972. The system of combinatory logic with S
and K as the only primitive functions is the simplest equation calculus that is
essentially undecidable. It is a type-free theory that allows the formation of
the term ff, i.e. self-application, which has given rise to problems of
interpretation. There are also type theories based on combinatory logic. The
systems obtained by extending the theory of combinators with functions
representing more familiar logical notions such as negation, implication, and
generality, or by adding a device for expressing inclusion in logical
categories, are studied in illative combinatory logic. The theory of
combinators exists in another, equivalent form, namely as the type-free
λ-calculus created by Church in 1932. Like the theory of combinators, it was
designed as a formalism for representing functions as rules of calculation, and
it was originally part of a more general system of functions intended as a
foundation for mathematics. The λ-calculus has application as a primitive operation, but instead of building up new functions from some primitive ones by application, new functions are here obtained by functional abstraction. If a(x) is an expression built up by means of application from constants and the variable x, then a(x) is considered to define a function denoted λx.a(x), whose value for the argument b is a(b), i.e. (λx.a(x))b = a(b). The function λx.a(x) is obtained from a(x) by functional abstraction. The property of combinatory completeness or closure under explicit definition is postulated in the form of functional abstraction. The combinators can be defined using functional abstraction i.e., K = λx.λy.x and S = λx.λy.λz.xz(yz), and conversely, in the theory of combinators, functional abstraction can be defined. A detailed presentation of the λ-calculus is found in H. Barendregt, The Lambda Calculus, Its Syntax and Semantics 1981. It is possible to represent the series of
natural numbers by a sequence of closed terms in the λ-calculus. Certain expressions in the λ-calculus will then represent functions on the natural numbers, and these λ-definable functions are exactly the general recursive functions or the Turing computable functions.
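The representation of the natural numbers by closed terms is standardly done with the Church numerals; they can be transcribed into Python lambdas as a runnable sketch, where the decoding helper to_int is an added convenience, not part of the calculus.

```python
# A minimal sketch of Church numerals: the numeral n is the closed term
# that applies a function f to an argument x exactly n times.

zero = lambda f: lambda x: x                         # applies f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))      # one more application of f
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Decode a numeral by counting how many times it applies its function.
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
assert to_int(two) == 2
assert to_int(add(two)(two)) == 4                    # 2 + 2 = 4
```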
The equivalence of λ-definability and general recursiveness was one of the arguments used by Church for what is known
as Church’s thesis, i.e., the identification of the effectively computable
functions and the recursive functions. The first problem about recursive
undecidability was expressed by Church as a problem about expressions in the λ-calculus. The λ-calculus thus played a historically important role in the
original development of recursion theory. Due to the emphasis in combinatory
logic on the computational aspect of functions, it is natural that its method
has been found useful in proof theory and in the development of systems of
constructive mathematics. For the same reason it has found several applications
in computer science in the construction and analysis of programming languages.
The techniques of combinatory logic have also been applied in theoretical
linguistics, e.g. in so-called Montague grammar. In recent decades combinatory
logic, like other domains of mathematical logic, has developed into a
specialized branch of mathematics, in which the original philosophical and
foundational aims and motives are of little and often no importance. One reason
for this is the discovery of the new technical applications, which were not
intended originally, and which have turned the interest toward several new
mathematical problems. Thus, the original motives are often felt to be less
urgent and only of historical significance. Another reason for the decline of the
original philosophical and foundational aims may be a growing awareness in the
philosophy of mathematics of the limitations of formal and mathematical methods
as tools for conceptual clarification, as
tools for reaching “ultimate foundations.”
commentaries on
Aristotle, the term commonly used for the Grecian commentaries on Aristotle
that take up about 15,000 pages in the Berlin Commentaria in Aristotelem Graeca
1882–1909, still the basic edition of them. Only in the 1980s did a project
begin, under the editorship of Richard Sorabji, of King’s College London, to
translate at least the most significant portions of them into English. They had
remained the largest corpus of Grecian philosophy not translated into any
modern language. Most of these works, especially the later, Neoplatonic ones,
are much more than simple commentaries on Aristotle. They are also a mode of
doing philosophy, the favored one at this stage of intellectual history. They
are therefore important not only for the understanding of Aristotle, but also
for both the study of the pre-Socratics and the Hellenistic philosophers,
particularly the Stoics, of whom they preserve many fragments, and lastly for
the study of Neoplatonism itself and, in
the case of John Philoponus, for studying the innovations he introduces in the
process of trying to reconcile Platonism with Christianity. The commentaries
may be divided into three main groups. 1 The first group of commentaries are
those by Peripatetic scholars of the second to fourth centuries A.D., most
notably Alexander of Aphrodisias fl. c.200, but also the paraphraser Themistius
fl. c.360. We must not omit, however, to note Alexander’s predecessor Aspasius,
author of the earliest surviving commentary, one on the Nicomachean Ethics a work not commented on again until the late
Byzantine period. Commentaries by Alexander survive on the Prior Analytics,
Topics, Metaphysics IV, On the Senses, and Meteorologics, and his now lost ones
on the Categories, On the Soul, and Physics had enormous influence in later
times, particularly on Simplicius. 2 By far the largest group is that of the
Neoplatonists up to the sixth century A.D. Most important of the earlier
commentators is Porphyry 232–c.309, of whom only a short commentary on the Categories
survives, together with an introduction Isagoge to Aristotle’s logical works,
which provoked many commentaries itself, and proved most influential in both
the East and through Boethius in the Latin West. The reconciling of Plato and
Aristotle is largely his work. His big commentary on the Categories was of
great importance in later times, and many fragments are preserved in that of
Simplicius. His follower Iamblichus was also influential, but his commentaries
are likewise lost. The Athenian School of Syrianus c.375–437 and Proclus 410–85
also commented on Aristotle, but all that survives is a commentary of Syrianus
on Books III, IV, XIII, and XIV of the Metaphysics. It is the early sixth
century, however, that produces the bulk of our surviving commentaries,
originating from the Alexandrian school of Ammonius, son of Hermeias c.435–520,
but composed both in Alexandria, by the Christian John Philoponus c.490–575, and
in or at least from Athens by Simplicius writing after 532. Main commentaries
of Philoponus are on Categories, Prior Analytics, Posterior Analytics, On
Generation and Corruption, On the Soul III, and Physics; of Simplicius on
Categories, Physics, On the Heavens, and perhaps On the Soul. The tradition is
carried on in Alexandria by Olympiodorus c.495–565 and the Christians Elias fl.
c.540 and David an Armenian, nicknamed the Invincible, fl. c.575, and finally
by Stephanus, who was brought by the emperor to take the chair of philosophy in
Constantinople in about 610. These scholars comment chiefly on the Categories
and other introductory material, but Olympiodorus produced a commentary on the
Meteorologics. Characteristic of the Neoplatonists is a desire to reconcile
Aristotle with Platonism arguing, e.g., that Aristotle was not dismissing the
Platonic theory of Forms, and to systematize his thought, thus reconciling him
with himself. They are responding to a long tradition of criticism, during
which difficulties were raised about incoherences and contradictions in
Aristotle’s thought, and they are concerned to solve these, drawing on their
comprehensive knowledge of his writings. Only Philoponus, as a Christian, dares
to criticize him, in particular on the eternity of the world, but also on the
concept of infinity on which he produces an ingenious argument, picked up, via
the Arabs, by Bonaventure in the thirteenth century. The Categories proves a
particularly fruitful battleground, and much of the later debate between
realism and nominalism stems from arguments about the proper subject matter of
that work. The format of these commentaries is mostly that adopted by scholars
ever since, that of taking one passage, or lemma, after
another of the source work and discussing it from every angle, but there are
variations. Sometimes the general subject matter is discussed first, and then
details of the text are examined; alternatively, the lemma is taken in
subdivisions without any such distinction. The commentary can also proceed
explicitly by answering problems, or aporiai, which have been raised by
previous authorities. Some commentaries, such as the short one of Porphyry on
the Categories, and that of Iamblichus’s pupil Dexippus on the same work, have
a “catechetical” form, proceeding by question and answer. In some cases as with
Wittgenstein in modern times the commentaries are simply transcriptions by
pupils of the lectures of a teacher. This is the case, for example, with the
surviving “commentaries” of Ammonius. One may also indulge in simple paraphrase,
as does Themistius on Posterior Analytics, Physics, On the Soul, and On the
Heavens, but even here a good deal of interpretation is involved, and his works
remain interesting. An important offshoot of all this activity in the Latin
West is the figure of Boethius c.480–524. It is he who first transmitted a
knowledge of Aristotelian logic to the West, to become an integral part of
medieval Scholasticism. He translated Porphyry’s Isagoge, and the whole of
Aristotle’s logical works. He wrote a double commentary on the Isagoge, and
commentaries on the Categories and On Interpretation. He is dependent
ultimately on Porphyry, but more immediately, it would seem, on a source in the
school of Proclus. 3 The third major group of commentaries dates from the late Byzantine
period, and seems mainly to emanate from a circle of scholars grouped around
the princess Anna Comnena in the twelfth century. The most important figures
here are Eustratius c.1050–1120 and Michael of Ephesus originally dated c.1040,
but now fixed at c.1130. Michael in particular seems concerned to comment on
areas of Aristotle’s works that had hitherto escaped commentary. He therefore
comments widely, for example, on the biological works, but also on the
Sophistical Refutations. He and Eustratius, and perhaps others, seem to have
cooperated also on a composite commentary on the Nicomachean Ethics, neglected
since Aspasius. There is also evidence of lost commentaries on the Politics and
the Rhetoric. The composite commentary on the Ethics was translated into Latin
in the next century, in England, by Robert Grosseteste, but earlier than this
translations of the various logical commentaries had been made by James of
Venice fl. c.1130, who may have even made the acquaintance of Michael of
Ephesus in Constantinople. Later in that century other commentaries were being
translated from Arabic versions by Gerard of Cremona d.1187. The influence of
the Grecian commentary tradition in the West thus resumed after the long break
since Boethius in the sixth century, but only now, it seems fair to say, is the
full significance of this enormous body of work becoming properly
appreciated.
ARISTOTLE, BOETHIUS,
NEOPLATONISM, PORPHYRY. J.M.D.
commentaries on Plato, a
term designating the works in the tradition of commentary hypomnema on Plato
that may go back to the Old Academy Crantor is attested by Proclus to have been
the first to have “commented” on the Timaeus. More probably, the tradition
arises in the first century B.C. in Alexandria, where we find Eudorus commenting,
again, on the Timaeus, but possibly also if the scholars who attribute to him
the Anonymous Theaetetus Commentary are correct on the Theaetetus. It seems
also as if the Stoic Posidonius composed a commentary of some sort on the
Timaeus. The commentary form such as we can observe in the biblical
commentaries of Philo of Alexandria owes much to the Stoic tradition of
commentary on Homer, as practiced by the second-century B.C. School of
Pergamum. It was normal to select usually consecutive portions of text lemmata
for general, and then detailed, comment, raising and answering “problems”
aporiai, refuting one’s predecessors, and dealing with points of both doctrine
and philology. By the second century A.D. the tradition of Platonic commentary
was firmly established. We have evidence of commentaries by the Middle
Platonists Gaius, Albinus, Atticus, Numenius, and Cronius, mainly on the
Timaeus, but also on at least parts of the Republic, as well as a work by
Atticus’s pupil Herpocration of Argos, in twenty-four books, on Plato’s work as
a whole. These works are all lost, but in the surviving works of Plutarch we
find exegesis of parts of Plato’s works, such as the creation of the soul in
the Timaeus 35a–36d. The Latin commentary of Calcidius fourth century A.D. is
also basically Middle Platonic. In the Neoplatonic period after Plotinus, who
did not indulge in formal commentary, though many of his essays are in fact
informal commentaries, we have evidence of much more comprehensive exegetic
activity. Porphyry initiated the tradition with commentaries on the Phaedo,
Cratylus, Sophist, Philebus, Parmenides
of which the surviving anonymous fragment of commentary is probably a part, and
the Timaeus. He also commented on the myth of Er in the Republic. It seems to
have been Porphyry who is responsible for introducing the allegorical
interpretation of the introductory portions of the dialogues, though it was
only his follower Iamblichus who also commented on all the above dialogues, as
well as the Alcibiades and the Phaedrus who introduced the principle that each
dialogue should have only one central theme, or skopos. The tradition was
carried on in the Athenian School by Syrianus and his pupils Hermeias on the
Phaedrus surviving and Proclus
Alcibiades, Cratylus, Timaeus, Parmenides
all surviving, at least in part, and continued in later times by
Damascius Phaedo, Philebus, Parmenides and Olympiodorus Alcibiades, Phaedo,
Gorgias also surviving, though sometimes
only in the form of pupils’ notes. These commentaries are not now to be valued
primarily as expositions of Plato’s thought though they do contain useful
insights, and much valuable information; they are best regarded as original
philosophical treatises presented in the mode of commentary, as is so much of
later Grecian philosophy, where it is not originality but rather faithfulness
to an inspired master and a great tradition that is being striven for.
common good, a normative
standard in Thomistic and Neo-Thomistic ethics for evaluating the justice of
social, legal, and political arrangements, referring to those arrangements that
promote the full flourishing of everyone in the community. Every good can be
regarded as both a goal to be sought and, when achieved, a source of human
fulfillment. A common good is any good sought by and/or enjoyed by two or more
persons (as friendship is a good common to the friends); the common good is the
good of a "perfect" (i.e., complete) and politically organized human community,
a good that is the common goal of all who
promote the justice of that community, as well as the common source of
fulfillment of all who share in those just arrangements. ‘Common’ is an
analogical term referring to kinds and degrees of sharing ranging from mere
similarity to a deep ontological communion. Thus, any good that is a genuine
perfection of our common human nature is a common good, as opposed to merely
idiosyncratic or illusory goods. But goods are common in a deeper sense when
the degree of sharing is more than merely coincidental: two children engaged in
parallel play enjoy a good in common, but they realize a common good more fully
by engaging each other in one game; similarly, if each in a group watches the
same good movie alone at home, they have enjoyed a good in common but they
realize this good at a deeper level when they watch the movie together in a
theater and discuss it afterward. In short, common good includes aggregates of
private, individual goods but transcends these aggregates by the unique
fulfillment afforded by mutuality, shared activity, and communion of persons.
As to the sources in Thomistic ethics for this emphasis on what is deeply
shared over what merely coincides, the first is Aristotle’s understanding of us
as social and political animals: many aspects of human perfection, on this
view, can be achieved only through shared activities in communities, especially
the political community. The second is Christian Trinitarian theology, in which
the single Godhead involves the mysterious communion of three divine “persons,”
the very exemplar of a common good; human personhood, by analogy, is similarly
perfected only in a relationship of social communion. The achievement of such
intimately shared goods requires very complex and delicate arrangements of
coordination to prevent the exploitation and injustice that plague shared
endeavors. The establishment and maintenance of these social, legal, and
political arrangements is “the” common good of a political society, because the
enjoyment of all goods is so dependent upon the quality and the justice of
those arrangements. The common good of the political community includes, but is
not limited to, public goods: goods characterized by non-rivalry and
non-excludability and which, therefore, must generally be provided by public
institutions. By the principle of subsidiarity, the common good is best
promoted by, in addition to the state, many lower-level non-public societies,
associations, and individuals. Thus, religiously affiliated schools educating
non-religious minority children might promote the common good
without being public goods.
compactness
theorem, a theorem for first-order logic: if every finite subset of a given
infinite theory T is consistent, then the whole theory is consistent. The
result is an immediate consequence of the completeness theorem, for if the
theory were not consistent, a contradiction, say ‘P and not-P’, would be
provable from it. But the proof, being a finitary object, would use only
finitely many axioms from T, so this finite subset of T would be inconsistent.
This proof of the compactness theorem is very general, showing that any
language that has a sound and complete system of inference, where each rule
allows only finitely many premises, satisfies the theorem. This is important
because the theorem immediately implies that many familiar mathematical notions
are not expressible in the language in question, notions like those of a finite
set or a well-ordering relation. The compactness theorem is important for other
reasons as well. It is the most frequently applied result in the study of
first-order model theory and has inspired interesting developments within set
theory and its foundations by generating a search for infinitary languages that
obey some analog of the theorem.
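For illustration, the inexpressibility of finiteness can be spelled out; the following short derivation (a supplementary sketch in LaTeX notation, not part of the entry) uses the standard sentences asserting "at least n elements":

\[
\sigma_n \;=\; \exists x_1 \cdots \exists x_n \bigwedge_{1 \le i < j \le n} x_i \neq x_j
\qquad \text{(``there are at least $n$ elements'')}
\]

If some first-order sentence $\phi$ were true in exactly the finite structures, then every finite subset of $T = \{\phi\} \cup \{\sigma_n : n \ge 1\}$ would be satisfiable (choose a sufficiently large finite model of $\phi$), so by compactness $T$ itself would have a model; but any model of $T$ is infinite while satisfying $\phi$, a contradiction. Hence finiteness is not expressible in first-order logic.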
complementary class, the
class of all things not in a given class. For example, if C is the class of all
red things, then its complementary class is the class containing everything
that is not red. This latter class includes even non-colored things, like
numbers and the class C itself. Often, the context will determine a less
inclusive complementary class. If B ⊆ A, then the complement of B with respect
to A is A − B. For example, if A is the
class of physical objects, and B is the class of red physical objects, then the
complement of B with respect to A is the class of non-red physical
objects.
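For readers who find a computational gloss helpful, here is a minimal Python sketch of the relative complement just defined (the particular sets are invented for illustration):

# Relative complement of B with respect to A: the members of A not in B.
A = {"apple", "fire truck", "sky", "grass"}   # toy domain of physical objects
B = {"apple", "fire truck"}                   # the red physical objects
print(A - B)                                  # {'sky', 'grass'}: the non-red objects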
completeness, a property
that something (typically, a set of axioms, a logic, a theory, a set of
well-formed formulas, a language, or a set of connectives) has when it is strong
enough in some desirable respect. 1 A set of axioms is complete for the logic L
if every theorem of L is provable using those axioms. 2 A logic L has weak
semantical completeness if every valid sentence of the language of L is a
theorem of L. L has strong semantical completeness or is deductively complete
if for every set G of sentences, every logical consequence of G is deducible
from G using L. A propositional logic L is Halldén-complete if whenever A ∨ B
is a theorem of L, where A and B share no variables, either A or B is a theorem
of L. And L is Post-complete if L is consistent but no stronger logic for the
same language is consistent. Reference to the “completeness” of a logic, without
further qualification, is almost invariably to either weak or strong semantical
completeness. One curious exception: second-order logic is often said to be
“incomplete,” where what is meant is that it is not axiomatizable. 3 A theory T
is negation-complete often simply complete if for every sentence A of the
language of T, either A or its negation is provable in T. And T is omega-complete
if whenever it is provable in T that a property φ holds of each natural number
0, 1, . . . , it is also provable that every number has φ. Generalizing on
this, any set G of well-formed formulas might be called omega-complete if ∀vA[v]
is deducible from G whenever A[t] is deducible from G for all terms t, where
A[t] is the result of replacing all free occurrences of v in A[v] by t. 4 A
language L is expressively complete if each of a given class of items is
expressible in L. Usually, the class in question is the class of two-valued
truth-functions. The propositional language whose sole connectives are ¬ and ∨
is thus said to be expressively or functionally complete, while that built up
using ∨ alone is not, since classical negation is not expressible therein. Here
one might also say that the set {¬, ∨} is expressively or functionally complete,
while {∨} is not.
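The claim just made can be spot-checked; the Python sketch below (illustrative, not part of the entry) defines conjunction from negation and disjunction via De Morgan's law. The reason ∨ alone fails is that any formula built from ∨ alone is true whenever all its inputs are true, a property negation lacks.

import itertools

def NOT(a): return not a
def OR(a, b): return a or b

# De Morgan: a and b is definable as not(not-a or not-b).
def AND_from_not_or(a, b):
    return NOT(OR(NOT(a), NOT(b)))

# Check agreement with 'and' on all four rows of the truth table.
for a, b in itertools.product([True, False], repeat=2):
    assert AND_from_not_or(a, b) == (a and b)
print("conjunction recovered from {not, or} on every row")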
complexe significabile
plural: complexe significabilia, also called complexum significabile, in
medieval philosophy, what is signified only by a complexum (a statement or
declarative sentence), by a that-clause, or by a dictum (an accusative +
infinitive construction, as in 'I want him to go'). It is analogous to the
modern proposition. The doctrine seems to have originated with Adam de Wodeham
in the early fourteenth century, but is usually associated with Gregory of
Rimini slightly later. Complexe significabilia do not fall under any of the
Aristotelian categories, and so do not “exist” in the ordinary way. Still, they
are somehow real. For before creation nothing existed except God, but even then
God knew that the world was going to exist. The object of this knowledge cannot
have been God himself, since God is necessary while the world's existence is
contingent; yet that contingent existence did not "exist" before creation. Nevertheless, it was real
enough to be an object of knowledge. Some authors who maintained such a view
held that these entities were not only signifiable in a complex way by a
statement, but were themselves complex in their inner structure; the term
‘complexum significabile’ is unique to their theories. The theory of complexe
significabilia was vehemently criticized by late medieval nominalists.
compossible, capable of
existing or occurring together. E.g., two individuals are compossible provided
the existence of one of them is compatible with the existence of the other. In
terms of possible worlds, things are compossible provided there is some
possible world to which all of them belong; otherwise they are incompossible.
Not all possibilities are compossible. E.g., the extinction of life on earth by
the year 3000 is possible; so is its continuation until the year 10,000; but
since it is impossible that both of these things should happen, they are not
compossible. Leibniz held that any non-actualized possibility must be
incompossible with what is actual.
comprehension, as applied
to a term, the set of attributes implied by a term. The comprehension of
‘square’, e.g., includes being four-sided, having equal sides, and being a
plane figure, among other attributes. The comprehension of a term is contrasted
with its extension, which is the set of individuals to which the term applies.
The distinction between the extension and the comprehension of a term was
introduced in the Port-Royal Logic by Arnauld and Pierre Nicole in 1662.
Current practice is to use the expression ‘intension’ rather than
‘comprehension’. Both expressions, however, are inherently somewhat vague.
completeness, combinatory
comprehension schema 163 163
compresence, an unanalyzable relation in terms of which Russell, in his later
writings (especially in Human Knowledge: Its Scope and Limits, 1948), took
concrete particular objects to be analyzable. Concrete particular objects are
analyzable in terms of complexes of qualities all of whose members are
compresent. Although this relation can be defined only ostensively, Russell states
that it appears in psychology as “simultaneity in one experience” and in
physics as “overlapping in space-time.” Complete complexes of compresence are
complexes of qualities having the following two properties: 1 all members of
the complex are compresent; 2 given anything not a member of the complex, there
is at least one member of the complex with which it is not compresent. He
argues that there is strong empirical evidence that no two complete complexes
have all their qualities in common. Finally, space-time point-instants are
analyzed as complete complexes of compresence. Concrete particulars, on the
other hand, are analyzed as series of incomplete complexes of compresence
related by certain causal laws.
computability, roughly,
the possibility of computation on a Turing machine. The first convincing
general definition, A. M. Turing's (1936), has been proved equivalent to the
known plausible alternatives, so that the concept of computability is generally
recognized as an absolute one. Turing’s definition referred to computations by
imaginary tape-processing machines that we now know to be capable of computing
the same functions (whether simple sums and products or highly complex, esoteric
functions) that modern digital computing machines could compute if provided with
sufficient storage capacity. In the form ‘Any function that is computable at
all is computable on a Turing machine’, this absoluteness claim is called
Turing's thesis. A comparable claim for Alonzo Church's (1935) concept of
λ-computability is called Church's thesis. Similar theses are enunciated for
Markov algorithms, for S. C. Kleene’s notion of general recursiveness, etc. It
has been proved that the same functions are computable in all of these ways.
There is no hope of proving any of those theses, for such a proof would require
a definition of 'computable', a definition that would simply be a further item
in the list, the subject of a
further thesis. But since computations of new kinds might be recognizable as
genuine in particular cases, Turing’s thesis and its equivalents, if false,
might be decisively refuted by discovery of a particular function, a way of
computing it, and a proof that no Turing machine can compute it. The halting
problem for, say, Turing machines is the problem of devising a Turing machine
that computes the function h(m, n) = 1 or 0 depending on whether or not Turing
machine number m ever halts, once started with the number n on its tape. This
problem is unsolvable, for a machine that computed h could be modified to
compute a function g(n), which is undefined (the machine goes into an endless
loop) when h(n, n) = 1, and otherwise agrees with h(n, n). But this modified
machine (Turing machine number k, say) would have contradictory properties: started
with k on its tape, it would eventually halt if and only if it does not. Turing
proved unsolvability of the decision problem for logic (the problem of devising
a Turing machine that, applied to argument number n in logical notation,
correctly classifies it as valid or invalid) by reducing the halting problem to
the decision problem, i.e., showing how any solution to the latter could be
used to solve the former problem, which we know to be unsolvable.
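The diagonal argument of this entry can be put in program form; in the Python sketch below, halts is a hypothetical oracle (no such function can actually be written, which is what the argument shows):

def halts(program: str, argument: str) -> bool:
    # Hypothetical oracle for h(m, n); the reasoning below shows that no
    # program can actually compute it.
    raise NotImplementedError("provably uncomputable")

def g(program: str) -> int:
    # The entry's modified machine: loop forever exactly when the given
    # program halts on its own source; otherwise halt at once.
    if halts(program, program):
        while True:
            pass
    return 0

# If g were itself Turing machine number k, running g on its own source
# would halt if and only if it does not halt; the contradiction shows
# that halts() is uncomputable.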
computer theory, the
theory of the design, uses, powers, and limits of modern electronic digital
computers. It has important bearings on philosophy, as may be seen from the
many philosophical references herein. Modern computers are a radically new kind
of machine, for they are active physical realizations of formal languages of
logic and arithmetic. Computers employ sophisticated languages, and they have
reasoning powers many orders of magnitude greater than those of any prior
machines. Because they are far superior to humans in many important tasks, they
have produced a revolution in society that is as profound as the industrial
revolution and is advancing much more rapidly. Furthermore,
computers themselves are evolving rapidly. When a computer is augmented with
devices for sensing and acting, it becomes a powerful control system, or a
robot. To understand the implications of computers for philosophy, one should
imagine a robot that has basic goals and volitions built into it, including
conflicting goals and competing desires. This concept first appeared in Karel
Čapek's play Rossum's Universal Robots (1920), where the word 'robot'
originated. A computer has two aspects, hardware and programming languages. The
theory of each is relevant to philosophy. The software and hardware aspects of
a computer are somewhat analogous to the human mind and body. This analogy is
especially strong if we follow Peirce and consider all information processing
in nature and in human organisms, not just the conscious use of language.
Evolution has produced a succession of levels of sign usage and information
processing: self-copying chemicals, self-reproducing cells, genetic programs
directing the production of organic forms, chemical and neuronal signals in
organisms, unconscious human information processing, ordinary languages, and
technical languages. But each level evolved gradually from its predecessors, so
that the line between body and mind is vague. The hardware of a computer is
typically organized into three general blocks: memory, processor (arithmetic
unit and control), and various input-output devices for communication between
machine and environment. The memory stores the data to be processed as well as
the program that directs the processing. The processor has an arithmetic-logic
unit for transforming data, and a control for executing the program. Memory,
processor, and input-output communicate with one another through a fast switching
system. The memory and processor are constructed from registers, adders,
switches, cables, and various other building blocks. These in turn are composed
of electronic components: transistors, resistors, and wires. The input and
output devices employ mechanical and electromechanical technologies as well as
electronics. Some input-output devices also serve as auxiliary memories; floppy
disks and magnetic tapes are examples. For theoretical purposes it is useful to
imagine that the computer has an indefinitely expandable storage tape. So
imagined, a computer is a physical realization of a Turing machine. The idea of
an indefinitely expandable memory is similar to the logician’s concept of an
axiomatic formal language that has an unlimited number of proofs and theorems.
The software of a modern electronic computer is written in a hierarchy of
programming languages. The higher-level languages are designed for use by human
programmers, operators, and maintenance personnel. The “machine language” is
the basic hardware language, interpreted and executed by the control. Its words
are sequences of binary digits or bits. Programs written in intermediate-level languages
are used by the computer to translate the languages employed by human users
into the machine language for execution. A programming language has
instructional means for carrying out three kinds of operations: data operations
and transfers, transfers of control from one part of the program to the other,
and program self-modification. Von Neumann designed the first modern
programming language. A programming language is general purpose, and an
electronic computer that executes it can in principle carry out any algorithm
or effective procedure, including the simulation of any other computer. Thus
the modern electronic computer is a practical realization of the abstract
concept of a universal Turing machine. What can actually be computed in
practice depends, of course, on the state of computer technology and its
resources. It is common for computers at many different spatial locations to be
interconnected into complex networks by telephone, radio, and satellite
communication systems. Insofar as users in one part of the network can control
other parts, either legitimately or illegitimately e.g., by means of a
“computer virus”, a global network of computers is really a global computer.
Such vast computers greatly increase societal interdependence, a fact of importance
for social philosophy. The theory of computers has two branches, corresponding
to the hardware and software aspects of computers. The fundamental concept of
hardware theory is that of a finite automaton, which may be expressed either as
an idealized logical network of simple computer primitives, or as the
corresponding temporal system of input, output, and internal states. A finite
automaton may be specified as a logical net of truth-functional switches and
simple memory elements, connected to one another by idealized wires. These
elements function synchronously, each wire being
in a binary state (0 or 1) at each moment of time t = 0, 1, 2, . . . . Each
switching element or "gate" executes a simple truth-functional operation (not,
or, and, nor, not-and, etc.) and is imagined to operate instantaneously (compare
the notions of sentential connective and truth table). A memory element
(flip-flop, binary counter, unit delay line) preserves its input bit for one or
more time-steps. A well-formed net of switches and memory elements may not have
cycles through switches only, but it typically has feedback cycles through
memory elements. The wires of a logical net are of three kinds: input,
internal, and output. Correspondingly, at each moment of time a logical net has
an input state, an internal state, and an output state. A logical net or
automaton need not have any input wires, in which case it is a closed system.
The complete history of a logical net is described by a deterministic law: at
each moment of time t, the input and internal states of the net determine its
output state and its next internal state. This leads to the second definition
of ‘finite automaton’: it is a deterministic finite-state system characterized
by two tables. The transition table gives the next internal state produced by
each pair of input and internal states. The output table gives the output state
produced by each input state and internal state. The state analysis approach to
computer hardware is of practical value only for systems with a few elements
(e.g., a binary-coded decimal counter), because the number of states increases
exponentially with the number of elements. Such a rapid rate of increase of complexity
with size is called the combinatorial explosion, and it applies to many
discrete systems. However, the state approach to finite automata does yield
abstract models of law-governed systems that are of interest to logic and
philosophy. A correctly operating digital computer is a finite automaton. Alan
Turing defined the finite part of what we now call a Turing machine in terms of
states. It seems doubtful that a human organism has more computing power than a
finite automaton. A closed finite automaton illustrates Nietzsche’s law of
eternal return. Since a finite automaton has a finite number of internal
states, at least one of its internal states must occur infinitely many times in
any infinite state history. And since a closed finite automaton is
deterministic and has no inputs, a repeated state must be followed by the same
sequence of states each time it occurs. Hence the history of a closed finite
automaton is periodic, as in the law of eternal return. Idealized neurons are
sometimes used as the primitive elements of logical nets, and it is plausible
that for any brain and central nervous system there is a logical network that
behaves the same and performs the same functions. This shows the close relation
of finite automata to the brain and central nervous system. The switches and
memory elements of a finite automaton may be made probabilistic, yielding a
probabilistic automaton. These automata are models of indeterministic systems.
Von Neumann showed how to extend deterministic logical nets to systems that
contain self-reproducing automata. This is a very basic logical design relevant
to the nature of life. The part of computer programming theory most relevant to
philosophy contains the answer to Leibniz’s conjecture concerning his
characteristica universalis and calculus ratiocinator. He held that “all our
reasoning is nothing but the joining and substitution of characters, whether
these characters be words or symbols or pictures.” He thought therefore that
one could construct a universal, arithmetic language with two properties of
great philosophical importance. First, every atomic concept would be
represented by a prime number. Second, the truth-value of any logically
true-or-false statement expressed in the characteristica universalis could be
calculated arithmetically, and so any rational dispute could be resolved by
calculation. Leibniz expected to do the computation by hand with the help of a
calculating machine; today we would do it on an electronic computer. However,
we know now that Leibniz’s proposed language cannot exist, for no computer or
computer program can calculate the truth-value of every logically true-orfalse
statement given to it. This fact follows from a logical theorem about the
limits of what computer programs can do. Let E be a modern electronic computer
with an indefinitely expandable memory, so that E has the power of a universal
Turing machine. And let L be any formal language in which every arithmetic
statement can be expressed, and which is consistent. Leibniz’s proposed
characteristica universalis would be such a language. Now a computer that is
operating correctly is an active formal language, carrying out the instructions
of its program deductively. Accordingly, Gödel’s incompleteness theorems for
formal arithmetic apply to computer E. It follows from these theorems that no
program can enable computer E to decide of an arbitrary statement of L
whether or not that statement is true. More strongly, there cannot even be a
program that will enable E to enumerate the truths of language L one after another.
Therefore Leibniz’s characteristica universalis cannot exist. Electronic
computers are the first active or “live” mathematical systems. They are the
latest addition to a long historical series of mathematical tools for inquiry:
geometry, algebra, calculus and differential equations, probability and
statistics, and modern mathematics. The most effective use of computer programs
is to instruct computers in tasks for which they are superior to humans.
Computers are being designed and programmed to cooperate with humans so that
the calculation, storage, and judgment capabilities of the two are synthesized.
The powers of such human-computer combines will increase at an exponential rate
as computers continue to become faster, more powerful, and easier to use, while
at the same time becoming smaller and cheaper. The social implications of this
are very important. The modern electronic computer is a new tool for the logic
of discovery (Peirce's abduction). An inquirer or inquirers operating a computer
interactively can use it as a universal simulator, dynamically modeling systems
that are too complex to study by traditional mathematical methods, including
non-linear systems. Simulation is used to explain known empirical results, and
also to develop new hypotheses to be tested by observation. Computer models and
simulations are unique in several ways: complexity, dynamism, controllability,
and visual presentability. These properties make them important new tools for
modeling and thereby relevant to some important philosophical problems. A
human-computer combine is especially suited for the study of complex holistic
and hierarchical systems with feedback (cf. cybernetics), including adaptive
goal-directed systems. A hierarchical-feedback system is a dynamic structure organized
into several levels, with the compounds of one level being the atoms or
building blocks of the next higher level, and with cyclic paths of influence
operating both on and between levels. For example, a complex human institution
has several levels, and the people in it are themselves hierarchical
organizations of self-copying chemicals, cells, organs, and such systems as the
pulmonary and the central nervous system. The behaviors of these systems are in
general much more complex than, e.g., the behaviors of traditional systems of
mechanics. Contrast an organism, society, or ecology with our planetary system
as characterized by Kepler and Newton. Simple formulas (ellipses) describe the
orbits of the planets. More basically, the planetary system is stable in the
sense that a small perturbation of it produces a relatively small variation in
its subsequent history. In contrast, a small change in the state of a holistic
hierarchical feedback system often amplifies into a very large difference in
behavior, a concern of chaos theory. For this reason it is helpful to model
such systems on a computer and run sample histories. The operator searches for
representative cases, interesting phenomena, and general principles of
operation. The human-computer method of inquiry should be a useful tool for the
study of biological evolution, the actual historical development of complex
adaptive goal-directed systems. Evolution is a logical and communication
process as well as a physical and chemical process. But evolution is statistical
rather than deterministic, because a single temporal state of the system
results in a probabilistic distribution of histories, rather than in a single
history. The genetic operators of mutation and crossover, e.g., are
probabilistic operators. But though it is stochastic, evolution cannot be
understood in terms of limiting relative frequencies, for the important
developments are the repeated emergence of new phenomena, and there may be no
evolutionary convergence toward a final state or limit. Rather, to understand
evolution the investigator must simulate the statistical spectra of histories
covering critical stages of the process. Many important evolutionary phenomena
should be studied by using simulation along with observation and experiment.
Evolution has produced a succession of levels of organization: self-copying
chemicals, self-reproducing cells, communities of cells, simple organisms,
haploid sexual reproduction, diploid sexuality with genetic dominance and
recessiveness, organisms composed of organs, societies of organisms, humans,
and societies of humans. Most of these systems are complex hierarchical
feedback systems, and it is of interest to understand how they emerged from
earlier systems. Also, the interaction of competition and cooperation at all
stages of evolution is an important subject, of relevance to social philosophy
and ethics. Some basic epistemological and metaphysical concepts enter into
computer modeling. A model is a well-developed concept of its object,
representing characteristics like structure and function. A model is similar
to its object in important respects, but simpler; in mathematical terminology,
a model is homomorphic to its object but not isomorphic to it. However, it is
often useful to think of a model as isomorphic to an embedded subsystem of the
system it models. For example, a gas is a complicated system of microstates of
particles, but these microstates can be grouped into macrostates, each with a
pressure, volume, and temperature satisfying the gas law PV = kT. The
derivation of this law from the detailed mechanics of the gas is a reduction of
the embedded subsystem to the underlying system. In many cases it is adequate
to work with the simpler embedded subsystem, but in other cases one must work
with the more complex but complete underlying system. The law of an embedded
subsystem may be different in kind from the law of the underlying system.
Consider, e.g., a machine tossing a coin randomly. The sequence of tosses obeys
a simple probability law, while the complex underlying mechanical system is
deterministic. The random sequence of tosses is a probabilistic system embedded
in a deterministic system, and a mathematical account of this embedding
relation constitutes a reduction of the probabilistic system to a deterministic
system. Compare the compatibilist’s claim that free choice can be embedded in a
deterministic system. Compare also a pseudorandom sequence, which is a
deterministic sequence with adequate randomness for a given finite simulation.
Note finally that the probabilistic system of quantum mechanics underlies the
deterministic system of mechanics. The ways in which models are used by
goal-directed systems to solve problems and adapt to their environments are
currently being modeled by human-computer combines. Since computer software can be
converted into hardware, successful simulations of adaptive uses of models
could be incorporated into the design of a robot. Human intentionality involves
the use of a model of oneself in relation to others and the environment. A
problem-solving robot using such a model would constitute an important step
toward a robot with full human powers. These considerations lead to the central
thesis of the philosophy of logical mechanism: a finite deterministic automaton
can perform all human functions. This seems plausible in principle and is
treated in detail in Merrilee Salmon, ed., The Philosophy of Logical Mechanism:
Essays in Honor of Arthur W. Burks, 1990. A digital computer has reasoning and
memory powers. Robots have sensory inputs for collecting information from the
environment, and they have moving and acting devices. To obtain a robot with
human powers, one would need to put these abilities under the direction of a
system of desires, purposes, and goals. Logical mechanism is a form of
mechanism or materialism, but differs from traditional forms of these doctrines
in its reliance on the logical powers of computers and the logical nature of
evolution and its products. The modern computer is a kind of complex
hierarchical physical system, a system with memory, processor, and control that
employs a hierarchy of programming languages. Humans are complex hierarchical
systems designed by evolution with
structural levels of chemicals, cells, organs, and systems (e.g., circulatory,
neural, immune) and linguistic levels of genes, enzymes, neural signals, and
immune recognition. Traditional materialists did not have this model of a
computer nor the contemporary understanding of evolution, and never gave an
adequate account of logic and reasoning and such phenomena as goaldirectedness
and self-modeling.
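As a small illustration of the closed finite automaton and the "eternal return" point above, the Python sketch below (the transition map is invented for the example) runs an input-free automaton until a state repeats; determinism then forces the whole history to cycle:

# Five internal states, no inputs, one deterministic next-state map.
transition = {0: 3, 1: 2, 2: 4, 3: 1, 4: 2}

state, first_seen, t = 0, {}, 0
while state not in first_seen:      # finitely many states: one must repeat
    first_seen[state] = t
    state = transition[state]
    t += 1
print(f"state {state} first occurred at t={first_seen[state]}, recurs at t={t}")
print(f"the history is periodic with period {t - first_seen[state]}")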
Comte, Auguste (1798–1857), French philosopher and
sociologist, the founder of positivism. He was educated in Paris at l’École
Polytechnique, where he briefly taught mathematics. He suffered from a mental
illness that occasionally interrupted his work. In conformity with empiricism,
Comte held that knowledge of the world arises from observation. He went beyond
many empiricists, however, in denying the possibility of knowledge of
unobservable physical objects. He conceived of positivism as a method of study
based on observation and restricted to the observable. He applied positivism
chiefly to science. He claimed that the goal of science is prediction, to be
accomplished using laws of succession. Explanation insofar as attainable has
the same structure as prediction. It subsumes events under laws of succession;
it is not causal. Influenced by Kant, he held that the causes of phenomena and
the nature of things-in-themselves are not knowable. He criticized metaphysics
for ungrounded speculation about such matters; he accused it of not keeping
imagination subordinate to observation. He advanced positivism for all the
sciences but held that each science has additional special methods, and has
laws not derivable by human intelligence from laws of other sciences. He
corresponded extensively with J. S. Mill, who encouraged his work and
discussed it in Auguste Comte and Positivism 1865. Twentieth-century logical
positivism was inspired by Comte’s ideas. Comte was a founder of sociology,
which he also called social physics. He divided the science into two
branches, statics and dynamics, dealing
respectively with social organization and social development. He advocated a
historical method of study for both branches. As a law of social development,
he proposed that all societies pass through three intellectual stages, first
interpreting phenomena theologically, then metaphysically, and finally positivistically.
The general idea that societies develop according to laws of nature was adopted
by Marx. Comte's most important work is his six-volume Cours de philosophie
positive (Course in Positive Philosophy, 1830–42). It is an encyclopedic treatment
of the sciences that expounds positivism and culminates in the introduction of
sociology.
conceivability,
capability of being conceived or imagined. Thus, golden mountains are
conceivable; round squares, inconceivable. As Descartes pointed out, the sort
of imaginability required is not the ability to form mental images. Chiliagons,
Cartesian minds, and God are all conceivable, though none of these can be
pictured “in the mind’s eye.” Historical references include Anselm’s definition
of God as “a being than which none greater can be conceived” and Descartes’s
argument for dualism from the conceivability of disembodied existence. Several
of Hume’s arguments rest upon the maxim that whatever is conceivable is
possible. He argued, e.g., that an event can occur without a cause, since this
is conceivable, and his critique of induction relies on the inference from the
conceivability of a change in the course of nature to its possibility. In
response, Reid maintained that to conceive is merely to understand the meaning
of a proposition. Reid argued that impossibilities are conceivable, since we
must be able to understand falsehoods. Many simply equate conceivability with
possibility, so that to say something is conceivable or inconceivable just is
to say that it is possible or impossible. Such usage is controversial, since
conceivability is broadly an epistemological notion concerning what can be
thought, whereas possibility is a metaphysical notion concerning how things can
be. The same controversy can arise regarding the compossible, or co-possible,
where two states of affairs are compossible provided it is possible that they
both obtain, and two propositions are compossible provided their conjunction is
possible. Alternatively, two things are compossible if and only if there is a
possible world containing both. Leibniz held that two things are compossible
provided they can be ascribed to the same possible world without contradiction.
“There are many possible universes, each collection of compossibles making one
of them.” Others have argued that non-contradiction is sufficient for neither
possibility nor compossibility. The claim that something is inconceivable is
usually meant to suggest more than merely an inability to conceive. It is to
say that trying to conceive results in a phenomenally distinctive mental
repugnance, e.g., when one attempts to conceive of an object that is red and
green all over at once. On this usage the inconceivable might be equated with
what one can “just see” to be impossible. There are two related usages of
‘conceivable’: 1 not inconceivable in the sense just described; and 2 such that
one can “just see” that the thing in question is possible. Goldbach’s
conjecture would seem a clear example of something conceivable in the first
sense, but not the second.
conceptualism, the view
that there are no universals and that the supposed classificatory function of
universals is actually served by particular concepts in the mind. A universal
is a property that can be instantiated by more than one individual thing or
particular at the same time; e.g., the shape of this , if identical with the
shape of the next , will be one property instantiated by two distinct
individual things at the same time. If viewed as located where the s are, then
it would be immanent. If viewed as not having spatiotemporal location itself,
but only bearing a connection, usually called instantiation or exemplification,
to things that have such location, then the shape of this square would be transcendent
and presumably would exist even
if exemplified by nothing, as Plato seems to have held. The conceptualist
rejects both views by holding that universals are merely concepts. Most
generally, a concept may be understood as a principle of classification,
something that can guide us in determining whether an entity belongs in a given
class or does not. Of course, properties understood as universals satisfy,
trivially, this definition and thus may be called concepts, as indeed they were
by Frege. But the conceptualistic substantive views of concepts are that
concepts are 1 mental representations, often called ideas, serving their
classificatory function presumably by resembling the entities to be classified;
or 2 brain states that serve the same function but presumably not by
resemblance; or 3 general words (adjectives, common nouns, verbs) or uses of such
words, an entity’s belonging to a certain class being determined by the
applicability to the entity of the appropriate word; or 4 abilities to classify
correctly, whether or not with the aid of an item belonging under 1, 2, or 3.
The traditional conceptualist holds 1. Defenders of 3 would be more properly
called nominalists. In whichever way concepts are understood, and regardless of
whether conceptualism is true, they are obviously essential to our
understanding and knowledge of anything, even at the most basic level of
cognition, namely, recognition. The classic work on the topic is Thinking and
Experience (1954) by H. H. Price, who held 4.
concursus dei, God’s
concurrence. The notion derives from a theory from medieval philosophical
theology, according to which any case of causation involving created substances
requires both the exercise of genuine causal powers inherent in creatures and
the exercise of God’s causal activity. In particular, a person’s actions are
the result of the person’s causal powers, often including the powers of
deliberation and choice, and God’s causal endorsement. Divine concurrence
maintains that the nature of God’s activity is more determinate than simply
conserving the created world in existence. Although divine concurrence agrees
with occasionalism in holding God’s power to be necessary for any event to
occur, it diverges from occasionalism insofar as it regards creatures as
causally active.
Condillac, Étienne Bonnot
de (1714–80), French philosopher, an empiricist who was considered the great
analytical mind of his generation. Close to Rousseau and Diderot, he stayed
within the church. He is closely, perhaps excessively, identified with the image of
the statue that, in the Traité des sensations (Treatise on Sense Perception,
1754), he endows with the five senses to explain how perceptions are assimilated
and produce understanding (cf. also his Treatise on the Origins of Human
Knowledge, 1746). He maintains a critical distance from precursors: he adopts
Locke's tabula rasa but, from his first work to the Logique (Logic, 1780), insists on
the creative role of the mind as it analyzes and compares sense impressions.
His Traité des animaux (Treatise on Animals, 1755), which includes a proof of the
existence of God, considers sensate creatures rather than Descartes’s animaux
machines and sees God only as a final cause. He reshapes Leibniz's monads in
the Monadologie (Monadology, 1748; rediscovered in 1980). In the Langue des
calculs (Language of Numbers, 1798) he proposes mathematics as a model of clear
analysis. The origin of language and creation of symbols eventually became his
major concern. His break with metaphysics in the Traité des systèmes
(Treatise on Systems, 1749) has been
overemphasized, but Condillac does replace rational constructs with sense
experience and reflection. His empiricism has been mistaken for materialism,
his clear analysis for simplicity. The “ideologues,” Destutt de Tracy and
Laromiguière, found Locke in his writings. Jefferson admired him. Maine de
Biran, while critical, was indebted to him for concepts of perception and the
self; Cousin disliked him; Saussure saw him as a forerunner in the study of the
origins of language.
condition, a state of
affairs or “way things are,” most commonly referred to in relation to something
that implies or is implied by it. Let p, q, and r be schematic letters for
declarative sentences; and let P, Q, and R be corresponding nominalizations;
e.g., if p is ‘snow is white’, then P would be ‘snow’s being white’. P can be a
necessary or sufficient condition of Q in any of several senses. In the weakest
sense P is a sufficient condition of Q iff (if and only if): if p then q (or:
if P is actual then Q is actual), where the conditional is to be read as
"material," as amounting merely to not-(p & not-q). At the same time Q is a
necessary condition of P iff: if not-q then
not-p. It follows that P is a sufficient condition of Q iff Q is a necessary
condition of P. Stronger senses of sufficiency and of necessity are definable,
in terms of this basic sense, as follows: P is nomologically sufficient
(necessary) for Q iff it follows from the laws of nature, but not without them,
that if p then q (that if q then p). P is alethically or metaphysically
sufficient (necessary) for Q iff it is alethically or metaphysically necessary
that if p then q (that if q then p). However, it is perhaps most common of all to interpret
conditions in terms of subjunctive conditionals, in such a way that P is a
sufficient condition of Q iff P would not occur unless Q occurred, or: if P
should occur, Q would; and P is a necessary condition of Q iff Q would not
occur unless P occurred, or: if Q should occur, P would.
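On the material reading, the duality stated above (P is sufficient for Q iff Q is necessary for P) can be verified mechanically; here is a minimal Python check over the four truth-value rows (the helper name is invented for the example):

import itertools

def implies(a: bool, b: bool) -> bool:
    # Material conditional: equivalent to not-(a and not-b).
    return (not a) or b

# (p -> q) expresses 'P is sufficient for Q'; (not-q -> not-p) expresses
# 'Q is necessary for P'. They agree on every row of the truth table.
for p, q in itertools.product([True, False], repeat=2):
    assert implies(p, q) == implies(not q, not p)
print("sufficiency and necessity, materially read, coincide on all rows")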
conditional, a compound
sentence, such as ‘if Abe calls, then Ben answers,’ in which one sentence, the
antecedent, is connected to a second, the consequent, by the connective ‘if . .
. then’. Propositions statements, etc. expressed by conditionals are called
conditional propositions statements, etc. and, by ellipsis, simply
conditionals. The ambiguity of the expression ‘if . . . then’ gives rise to a
semantic classification of conditionals into material conditionals, causal
conditionals, counterfactual conditionals, and so on. In traditional logic,
conditionals are called hypotheticals, and in some areas of mathematical logic
conditionals are called implications. The correct analysis of the meaning of
conditionals continues to be investigated and intensely disputed.
conditional proof. 1 The argument form ‘B
follows from A; therefore, if A then B’ and arguments of this form. 2 The rule
of inference that permits one to infer a conditional given a derivation of its
consequent from its antecedent. This is also known as the rule of conditional
proof or ⊃-introduction. G.F.S.
conditioning, a form of
associative learning that occurs when changes in thought or behavior are
produced by temporal relations among events. It is common to distinguish
between two types of conditioning; one, classical or Pavlovian, in which
behavior change results from events that occur before behavior; the other,
operant or instrumental, in which behavior change occurs because of events
after behavior. Roughly, classically and operantly conditioned behavior
correspond to the everyday, folk-psychological distinction between involuntary
and voluntary or goal-directed behavior. In classical conditioning, stimuli or
events elicit a response (e.g., salivation); neutral stimuli (e.g., a dinner bell)
gain control over behavior when paired with stimuli that already elicit
behavior (e.g., the appearance of dinner). The behavior is involuntary. In
operant conditioning, stimuli or events reinforce behavior after behavior occurs;
neutral stimuli gain power to reinforce by being paired with actual
reinforcers. Here, occasions on which behavior is reinforced serve as
discriminative stimuli evoking behavior. Operant behavior is goal-directed, if
not consciously or deliberately, then through the bond between behavior and
reinforcement. Thus, the arrangement of condiments at dinner may serve as the
discriminative stimulus evoking the request “Please pass the salt,” whereas
saying “Thank you” may reinforce the behavior of passing the salt. It is not
easy to integrate conditioning phenomena into a unified theory of conditioning.
Some theorists contend that operant conditioning is really classical
conditioning veiled by subtle temporal relations among events. Other theorists
contend that operant conditioning requires mental representations of
reinforcers and discriminative stimuli. B. F. Skinner (1904–90) argued in Walden
Two (1948) that astute, benevolent behavioral engineers can and should use
conditioning to create a social utopia.
conditio sine qua non
Latin, ‘a condition without which not’, a necessary condition; something
without which something else could not be or could not occur. For example,
being a plane figure is a conditio sine qua non for being a triangle. Sometimes
the phrase is used emphatically as a synonym for an unconditioned
presupposition, be it for an action to start or an argument to get going. I.Bo.
Condorcet, Marquis de, title of Marie-Jean-Antoine-Nicolas de Caritat (1743–94),
French philosopher and political theorist who contributed to the Encyclopedia
and pioneered the mathematical analysis of social institutions. Although
prominent in the Revolutionary government, he was denounced for his political
views and died in prison. Condorcet discovered the voting paradox, which shows
that majoritarian voting can produce cyclical group preferences. Suppose, for
instance, that voters A, B, and C rank proposals x, y, and z as follows: A:
xyz, B: yzx, and C: zxy. Then in majoritarian voting x beats y and y beats z,
but z in turn beats x. So the resulting group preferences are cyclical. The
discovery of this problem helped initiate social choice theory, which evaluates
voting systems. Condorcet argued that any satisfactory voting system must
guarantee selection of a proposal that beats all rivals in majoritarian
competition. Such a proposal is called a Condorcet winner. His jury theorem
says that if voters register their opinions about some matter, such as whether
a defendant is guilty, and the probabilities that individual voters are right
are equal, independent, and greater than ½, then the majority vote is more
likely to be correct than any individual's or minority's vote. Condorcet's main
works are Essai sur l'application de l'analyse à la probabilité des décisions
rendues à la pluralité des voix (Essay on the Application of Analysis to the
Probability of Decisions Reached by a Majority of Votes, 1785); and a posthumous
treatise on social issues, Esquisse d'un tableau historique des progrès de
l'esprit humain (Sketch for a Historical Picture of the Progress of the Human
Mind, 1795).
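The voting paradox described above can be checked directly; the Python sketch below encodes the entry's profile (A: xyz, B: yzx, C: zxy) and computes the pairwise majorities:

# Each ballot lists the proposals from most preferred to least preferred.
ballots = [("x", "y", "z"),   # voter A
           ("y", "z", "x"),   # voter B
           ("z", "x", "y")]   # voter C

def beats(a: str, b: str) -> bool:
    # True if a strict majority of ballots ranks a above b.
    return sum(bal.index(a) < bal.index(b) for bal in ballots) > len(ballots) / 2

for a, b in [("x", "y"), ("y", "z"), ("z", "x")]:
    print(f"{a} beats {b}: {beats(a, b)}")
# All three lines print True: the majorities form a cycle, so this
# profile has no Condorcet winner.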
confirmation, an
evidential relation between evidence and any statement especially a scientific
hypothesis that this evidence supports. It is essential to distinguish two
distinct, and fundamentally different, meanings of the term: 1 the incremental
sense, in which a piece of evidence contributes at least some degree of support
to the hypothesis in question (e.g., finding a fingerprint of the suspect at
the scene of the crime lends some weight to the hypothesis that the suspect is
guilty); and 2 the absolute sense, in which a body of evidence provides strong
support for the hypothesis in question (e.g., a case presented by a prosecutor
making it practically certain that the suspect is guilty). If one
thinks of confirmation in terms of probability, then evidence that increases
the probability of a hypothesis confirms it incrementally, whereas evidence
that renders a hypothesis highly probable confirms it absolutely. In each of
the two foregoing senses one can distinguish three types of confirmation: i
qualitative, ii quantitative, and iii comparative. i Both examples in the
preceding paragraph illustrate qualitative confirmation, for no numerical
values of the degree of confirmation were mentioned. ii If a gambler, upon
learning that an opponent holds a certain card, asserts that her chance of
winning has increased from 2/3 to 3/4, the claim is an instance of quantitative
incremental confirmation. If a physician states that, on the basis of an X-ray,
the probability that the patient has tuberculosis is .95, that claim
exemplifies quantitative absolute confirmation. In the incremental sense, any
case of quantitative confirmation involves a difference between two probability
values; in the absolute sense, any case of quantitative confirmation involves
only one probability value. iii Comparative confirmation in the incremental
sense would be illustrated if an investigator said that possession of the
murder weapon weighs more heavily against the suspect than does the
fingerprint found at the scene of the crime. Comparative confirmation in the
absolute sense would occur if a prosecutor claimed to have strong cases against
two suspects thought to be involved in a crime, but that the case against one
is stronger than that against the other. Even given recognition of the
foregoing six varieties of confirmation, there is still considerable
controversy regarding its analysis. Some authors claim that quantitative
confirmation does not exist; only qualitative and/or comparative confirmation
are possible. Some authors maintain that confirmation has nothing to do with
probability, whereas others, known as Bayesians, analyze confirmation
explicitly in terms of Bayes’s theorem in the mathematical calculus of
probability. Among those who offer probabilistic analyses there are differences
as to which interpretation of probability is suitable in this context. Popper
advocates a concept of corroboration that differs fundamentally from
confirmation. Many real or apparent paradoxes of confirmation have been posed;
the most famous is the paradox of the ravens. It is plausible to suppose that
'All ravens are black' can be incrementally confirmed by the observation of one
of its instances, namely, a black raven. However, 'All ravens are black' is
logically equivalent to ‘All non-black things are non-ravens.’ By parity of
reasoning, an instance of this statement, namely, any non-black non-raven
(e.g., a white shoe), should incrementally confirm it. Moreover, the equivalence
condition (whatever confirms a hypothesis must equally confirm any statement
logically equivalent to it) seems eminently reasonable. The result
appears to facilitate indoor ornithology, for the observation of a white shoe
would seem to confirm incrementally the hypothesis that all ravens are black.
Many attempted resolutions of this paradox can be found in the literature.
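On the probabilistic reading described above, incremental confirmation is simply an increase in probability under Bayes's theorem; the Python sketch below uses invented numbers for the fingerprint example:

# Bayes's theorem: P(H|E) = P(E|H) * P(H) / P(E), where
# P(E) = P(E|H) * P(H) + P(E|not-H) * P(not-H).
prior = 0.10            # P(H): guilt, before the fingerprint evidence
likelihood = 0.90       # P(E|H): chance of the fingerprint if guilty
false_positive = 0.05   # P(E|not-H): chance of the fingerprint if innocent

p_e = likelihood * prior + false_positive * (1 - prior)
posterior = likelihood * prior / p_e
print(f"P(H) = {prior:.3f}, P(H|E) = {posterior:.3f}")
# The posterior (about 0.667) exceeds the prior, so the evidence confirms
# the hypothesis incrementally; absolute confirmation would require the
# posterior itself to be high (say, above 0.95).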
Confucianism, a Chinese
school of thought and set of moral, ethical, and political teachings usually
considered to be founded by Confucius. Before the time of Confucius (sixth–fifth
century B.C.), a social group, the Ju (literally, 'weaklings' or 'foundlings'),
existed whose members were ritualists and sometimes also teachers by
profession. Confucius belonged to this group; but although he retained the
interest in rituals, he was also concerned with the then chaotic social and
political situation and with the search for remedies, which he believed to lie
in the restoration and maintenance of certain traditional values and norms. Later
thinkers who professed to be followers of Confucius shared such concern and
belief and, although they interpreted and developed Confucius’s teachings in
different ways, they are often regarded as belonging to the same school of
thought, traditionally referred to by Chinese scholars as Ju-chia, or the
school of the Ju. The term ‘Confucianism’ is used to refer to some or all of
the range of phenomena including the way of life of the Ju as a group of
ritualists, the school of thought referred to as Ju-chia, the ethical, social,
and political ideals advocated by this school of thought which include but go
well beyond the practice of rituals, and the influence of such ideals on the
actual social and political order and the life of the Chinese. As a school of
thought, Confucianism is characterized by a common ethical ideal which includes
an affective concern for all living things, varying in degree and nature
depending on how such things relate to oneself; a reverential attitude toward
others manifested in the observance of formal rules of conduct such as the way
to receive guests; an ability to determine the proper course of conduct,
whether this calls for observance of traditional norms or departure from such
norms; and a firm commitment to proper conduct so that one is not swayed by
adverse circumstances such as poverty or death. Everyone is supposed to have
the ability to attain this ideal, and people are urged to exercise constant
vigilance over their character so that they can transform themselves to embody
this ideal fully. In the political realm, a ruler who embodies the ideal will
care about and provide for the people, who will be attracted to him; the moral
example he sets will have a transforming effect on the people. Different
Confucian thinkers have different conceptions of the way the ethical ideal may
be justified and attained. Mencius (fourth century B.C.) regarded the ideal as a
full realization of certain incipient moral inclinations shared by human
beings, and emphasized the need to reflect on and fully develop such
inclinations. Hsün Tzu (third century B.C.) regarded it as a way of optimizing
the satisfaction of presocial human desires, and emphasized the need
to learn the norms governing social distinctions and let them transform and
regulate the pursuit of satisfaction of such desires. Different kinds of
Confucian thought continued to evolve, yielding such major thinkers as Tung
Chung-shu (second century B.C.) and Han Yü (A.D. 768–824). Han Yü regarded Mencius
as the true transmitter of Confucius’s teachings, and this view became
generally accepted, largely through the efforts of Chu Hsi (1130–1200). The
Mencian form of Confucian thought continued to be developed in different ways
by such major thinkers as Chu Hsi, Wang Yang-ming (1472–1529), and Tai Chen
(1723–77), who differed concerning the way to attain the Confucian ideal and the
metaphysics undergirding it. Despite these divergent developments, Confucius
continued to be revered within this tradition of thought as its first and most
important thinker, and the Confucian school of thought continued to exert great
influence on Chinese life and on the social and political order down to the
present century.
Confucius, also known as
K'ung Ch'iu, K'ung Tzu, or K'ung Fu-tzu (sixth–fifth century B.C.), Chinese thinker
usually regarded as founder of the Confucian school of thought. His teachings
are recorded in the Lun Yü or Analects, a collection of sayings by him and by
disciples, and of conversations between him and his disciples. His highest
ethical ideal is jen humanity, goodness, which includes an affective concern
for the wellbeing of others, desirable attributes e.g. filial piety within
familial, social, and political institutions, and other desirable attributes
such as yung courage, bravery. An important part of the ideal is the general
observance of li rites, the traditional norms governing conduct between people
related by their different social positions, along with a critical reflection
on such norms and a preparedness to adapt them to present circumstances. Human
conduct should not be dictated by fixed rules, but should be sensitive to
relevant considerations and should accord with yi (rightness, duty). Other important concepts include shu (consideration, reciprocity), which involves not doing to another what one would not have wished done to oneself, and chung (loyalty, commitment), interpreted variously as a commitment to the exercise of shu, to the norms of li, or to one’s duties toward superiors and equals. The
ideal of jen is within the reach of all, and one should constantly reflect on
one’s character and correct one’s deficiencies. Jen has transformative powers
that should ideally be the basis of government; a ruler with jen will care
about and provide for the people, who will be attracted to him, and the moral
example he sets will inspire people to reform themselves.
conjunction, the logical
operation on a pair of propositions that is typically indicated by the
coordinating conjunction ‘and’. The truth table for conjunction is:

A  B  A & B
T  T    T
T  F    F
F  T    F
F  F    F

Besides ‘and’, other coordinating conjunctions, including ‘but’, ‘however’, ‘moreover’, and ‘although’, can indicate logical conjunction, as can the semicolon ‘;’ and the comma ‘,’.
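The table can also be generated mechanically; a short Python sketch (an illustration added here, not part of the entry) enumerates it:

    # Enumerate the truth table for conjunction over the four
    # possible assignments of truth-values to A and B.
    for a in (True, False):
        for b in (True, False):
            print(a, b, a and b)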
conjunction elimination.
1 The argument form ‘A and B; therefore, A’ (or ‘A and B; therefore, B’) and arguments of this form. 2 The rule of inference that permits one to infer either conjunct from a conjunction. This is also known as the rule of simplification or &-elimination.
conjunction introduction.
1 The argument form ‘A, B; therefore, A and B’ and arguments of this form. 2
The rule of inference that permits one to infer a conjunction from its two
conjuncts. This is also known as the rule of conjunction, &-introduction, or adjunction.
connected, said of a
relation R where, for any two distinct elements x and y of the domain, either
xRy or yRx. R is said to be strongly connected if, for any two elements x and
y, either xRy or yRx, even if x and y are identical. Given the domain of
positive integers, for instance, the relation < is connected, since for any two distinct numbers a and b, either a < b or b < a. < is not strongly connected, however, since if a = b we do not have either a < b or b < a. The relation ≤, however, is strongly connected, since either a ≤ b or b ≤ a for any two numbers, including the case where a = b. An example of a relation that is not connected is the subset relation ⊆, since it is not true that for any two sets A and B, either A ⊆ B or B ⊆ A.
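Since the definitions quantify over all pairs, they can be checked mechanically on a finite domain; the following Python sketch (an added illustration, with an arbitrarily chosen three-element domain) tests both properties:

    # A relation is given as a set of ordered pairs over a finite domain.
    def connected(domain, relation):
        # Every pair of *distinct* elements must be related one way or the other.
        return all((x, y) in relation or (y, x) in relation
                   for x in domain for y in domain if x != y)

    def strongly_connected(domain, relation):
        # Every pair, including identical elements, must be related.
        return all((x, y) in relation or (y, x) in relation
                   for x in domain for y in domain)

    domain = {1, 2, 3}
    less = {(x, y) for x in domain for y in domain if x < y}
    less_or_equal = {(x, y) for x in domain for y in domain if x <= y}

    assert connected(domain, less) and not strongly_connected(domain, less)
    assert strongly_connected(domain, less_or_equal)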
connectionism, an approach
to modeling cognitive systems which utilizes networks of simple processing
units that are inspired by the basic structure of the nervous system. Other
names for this approach are neural network modeling and parallel distributed
processing. Connectionism was pioneered in the period 1940–65 by researchers
such as Frank Rosenblatt and Oliver Selfridge. Interest in using such networks
diminished during the 1970s because of limitations encountered by existing
networks and the growing attractiveness of the computer model of the mind
according to which the mind stores symbols in memory and registers and performs
computations upon them. Connectionist models enjoyed a renaissance in the
1980s, partly as the result of the discovery of means of overcoming earlier limitations (e.g., development of the back-propagation learning algorithm by David Rumelhart, Geoffrey Hinton, and Ronald Williams, and of the Boltzmann-machine learning algorithm by David Ackley, Geoffrey Hinton, and Terrence Sejnowski), and partly as limitations encountered with the computer model rekindled
interest in alternatives. Researchers employing connectionist-type nets are
found in a variety of disciplines including psychology, artificial
intelligence, neuroscience, and physics. There are often major differences in
the endeavors of these researchers: psychologists and artificial intelligence
researchers are interested in using these nets to model cognitive behavior,
whereas neuroscientists often use them to model processing in particular neural
systems. A connectionist system consists of a set of processing units that can
take on activation values. These units are connected so that particular units
can excite or inhibit others. The activation of any particular unit will be
determined by one or more of the following: inputs from outside the system, the
excitations or inhibitions supplied by other units, and the previous activation
of the unit. There are a variety of different architectures invoked in
connectionist systems. In feedforward nets units are clustered into layers and
connections pass activations in a unidirectional manner from a layer of input
units to a layer of output units, possibly passing through one or more layers
of hidden units along the way. In these systems processing requires one pass of
processing through the network. Interactive nets exhibit no directionality of
processing: a given unit may excite or inhibit another unit, and it, or another
unit influenced by it, might excite or inhibit the first unit. A number of
processing cycles will ensue after an input has been given to some or all of
the units until eventually the network settles into one state, or cycles
through a small set of such states. One of the most attractive features of
connectionist networks is their ability to learn. This is accomplished by
adjusting the weights connecting the various units of the system, thereby
altering the manner in which the network responds to inputs. To illustrate the
basic process of connectionist learning, consider a feedforward network with just
two layers of units and one layer of connections. One learning procedure
commonly referred to as the delta rule first requires the network to respond,
using current weights, to an input. The activations on the units of the second
layer are then compared to a set of target activations, and detected
differences are used to adjust the weights coming from active input units. Such
a procedure gradually reduces the difference between the actual response and the target response; a minimal code sketch of this procedure appears at the end of this entry. In order to construe such networks as cognitive models it
is necessary to interpret the input and output units. Localist interpretations
treat individual input and output units as representing concepts such as those
found in natural language. Distributed interpretations correlate only patterns
of activation of a number of units with ordinary language concepts. Sometimes
but not always distributed models will interpret individual units as
corresponding to microfeatures. In one interesting variation on distributed
representation, known as coarse coding, each symbol will be assigned to a
different subset of the units of the system, and the symbol will be viewed as
active only if a predefined number of the assigned units are active. A number
of features of connectionist nets make them particularly attractive for
modeling cognitive phenomena in addition to their ability to learn from
experience. They are extremely efficient at pattern-recognition tasks and often
generalize very well from training inputs to similar test inputs. They can
often recover complete patterns from partial inputs, making them good models
for content-addressable memory. Interactive networks are particularly useful in
modeling cognitive tasks in which multiple constraints must be satisfied
simultaneously, or in which the goal is to satisfy competing constraints
as well as possible. In a natural manner they can override some constraints on
a problem when it is not possible to satisfy all, thus treating the constraints
as soft. While cognitive connectionist models are not intended to model
actual neural processing, they suggest how cognitive processes can be realized
in neural hardware. They also exhibit a feature demonstrated by the brain but
difficult to achieve in symbolic systems: their performance degrades gracefully
as units or connections are disabled or the capacity of the network is
exceeded, rather than crashing. Serious challenges have been raised to the
usefulness of connectionism as a tool for modeling cognition. Many of these
challenges have come from theorists who have focused on the complexities of
language, especially the systematicity exhibited in language. Jerry Fodor and
Zenon Pylyshyn, for example, have emphasized the manner in which the meaning of
complex sentences is built up compositionally from the meaning of components,
and argue both that compositionality applies to thought generally and that it
requires a symbolic system. Therefore, they maintain, while cognitive systems
might be implemented in connectionist nets, these nets do not characterize the
architecture of the cognitive system itself, which must have capacities for
symbol storage and manipulation. Connectionists have developed a variety of
responses to these objections, including emphasizing the importance of
cognitive functions such as pattern recognition, which have not been as
successfully modeled by symbolic systems; challenging the need for symbol
processing in accounting for linguistic behavior; and designing more complex
connectionist architectures, such as recurrent networks, capable of responding
to or producing systematic structures.
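As promised above, here is a minimal Python sketch of delta-rule learning in a two-layer feedforward network. The network size, training data, learning rate, and function names are illustrative assumptions added here, not anything specified in the entry:

    import random

    def train_delta_rule(samples, n_in, n_out, rate=0.1, epochs=200):
        # One layer of connections: weights[i][j] links input unit i
        # to output unit j.
        weights = [[random.uniform(-0.5, 0.5) for _ in range(n_out)]
                   for _ in range(n_in)]
        for _ in range(epochs):
            for inputs, targets in samples:
                # First, respond to the input using the current weights.
                outputs = [sum(inputs[i] * weights[i][j] for i in range(n_in))
                           for j in range(n_out)]
                # Then compare the output activations to the targets and
                # adjust the weights coming from active input units.
                for j in range(n_out):
                    error = targets[j] - outputs[j]
                    for i in range(n_in):
                        weights[i][j] += rate * error * inputs[i]
        return weights

    # A toy two-input, one-output training set (hypothetical example).
    data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [2])]
    trained = train_delta_rule(data, n_in=2, n_out=1)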
connotation. 1 The ideas
and associations brought to mind by an expression (a sense used in contrast with ‘denotation’ and ‘meaning’). 2 In a technical use, the properties jointly
necessary and sufficient for the correct application of the expression in
question.
consequentialism, the
doctrine that the moral rightness of an act is determined solely by the
goodness of the act’s consequences. Prominent consequentialists include J. S.
Mill, Moore, and Sidgwick. Maximizing versions of consequentialism (the most common sort) hold that an act is morally right if and only
if it produces the best consequences of those acts available to the agent.
Satisficing consequentialism holds that an act is morally right if and only if
it produces enough good consequences on balance. Consequentialist theories are
often contrasted with deontological ones, such as Kant’s, which hold that the
rightness of an act is determined at least in part by something other than the
goodness of the act’s consequences. A few versions of consequentialism are
agent-relative: that is, they give each agent different aims, so that different agents’ aims may conflict. For instance, egoistic consequentialism holds that the moral rightness of an act for an agent depends solely on the goodness of its consequences for him or her. However, the vast majority of consequentialist theories have been agent-neutral, and consequentialism is often defined in a more restrictive way so that agent-relative versions do not count as
consequentialist. A doctrine is agent-neutral when it gives to each agent the
same ultimate aims, so that different agents’ aims cannot conflict. For
instance, utilitarianism holds that an act is morally right if and only if it
produces more happiness for the sentient beings it affects than any other act
available to the agent. This gives each agent the same ultimate aim, and so is
agent-neutral. Consequentialist theories differ over
what features of acts they hold to determine their goodness. Utilitarian
versions hold that the only consequences of an act relevant to its goodness are
its effects on the happiness of sentient beings. But some consequentialists
hold that the promotion of other things matters too (achievement, autonomy, knowledge, or fairness, for instance). Thus utilitarianism, as a maximizing, agent-neutral, happiness-based view, is only one of a broad range of consequentialist theories.
consequentia mirabilis,
the logical principle that if a statement follows from its own negation it must
be true. Strict consequentia mirabilis is the principle that if a statement
follows logically from its own negation it is logically true. The principle is
often connected with the paradoxes of strict implication, according to which
any statement follows from a contradiction. Since the negation of a tautology
is a contradiction, every tautology follows from its own negation. However, if
every expression of the form ‘if p then q’ implies ‘not-p or q’ (they need not be equivalent), then from ‘if not-p then p’ we can derive ‘not-not-p or p’ and
by the principles of double negation and repetition derive p. Since all of
these rules are unexceptionable the principle of consequentia mirabilis is also
unexceptionable. It is, however, somewhat counterintuitive, hence the name ‘the
astonishing implication’, which goes back to its medieval discoverers or
rediscoverers.
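Read truth-functionally, the principle can be verified by cases; a brief Python check (an added sketch, treating ‘if . . . then’ as the material conditional):

    def implies(a, b):
        # Material implication: 'if a then b' is false only when
        # a is true and b is false.
        return (not a) or b

    for p in (True, False):
        # If 'not-p implies p' holds, then p holds.
        assert implies(implies(not p, p), p)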
consistency, in
traditional Aristotelian logic, a semantic notion: two or more statements are
called consistent if they are simultaneously true under some interpretation
(cf., e.g., W. S. Jevons, Elementary Lessons in Logic, 1870). In modern logic there is a syntactic definition that also fits complex (e.g., mathematical) theories developed since Frege’s Begriffsschrift (1879): a set of statements is called consistent with respect to a certain logical calculus, if no formula ‘P & not-P’ is derivable from those statements by the rules of the calculus; i.e.,
the theory is free from contradictions. If these definitions are equivalent for
a logic, we have a significant fact, as the equivalence amounts to the
completeness of its system of rules. The first such completeness theorem was
obtained for sentential or propositional logic by Paul Bernays in 1918 in his
Habilitationsschrift that was partially published as Axiomatische Untersuchung
des Aussagen-Kalküls der “Principia Mathematica,” 1926 and, independently, by
Emil Post in Introduction to a General Theory of Elementary Propositions, 1921;
the completeness of predicate logic was proved by Gödel in Die Vollständigkeit
der Axiome des logischen Funktionenkalküls, 1930. The crucial step in such
proofs shows that syntactic consistency implies semantic consistency. Cantor
applied the notion of consistency to sets. In a well-known letter to Dedekind
(1899) he distinguished between an inconsistent and a consistent multiplicity;
the former is such “that the assumption that all of its elements ‘are together’
leads to a contradiction,” whereas the elements of the latter “can be thought
of without contradiction as ‘being together’.” Cantor had conveyed these distinctions and their motivation by letter to Hilbert in 1897 (see W. Purkert and H. J. Ilgauds, Georg Cantor, 1987). Hilbert pointed out explicitly in 1904
that Cantor had not given a rigorous criterion for distinguishing between
consistent and inconsistent multiplicities. Already in his Über den Zahlbegriff
(1899) Hilbert had suggested a remedy by giving consistency proofs for suitable axiomatic systems; e.g., to give the proof of the “existence of the totality of real numbers or, in the terminology of G. Cantor, the proof of the fact that the system of real numbers is a consistent (complete) set” by establishing the consistency of an axiomatic characterization of the reals (in modern terminology, of the theory of complete ordered fields). And he claimed, somewhat indeterminately, that this
could be done “by a suitable modification of familiar methods.” After 1904,
Hilbert pursued a new way of giving consistency proofs. This novel way of
proceeding, still aiming for the same goal, was to make use of the
formalization of the theory at hand. However, in the formulation of Hilbert’s
Program during the 1920s the point of consistency proofs was no longer to
guarantee the existence of suitable sets, but rather to establish the
instrumental usefulness of strong mathematical theories T, like
axiomatic set theory, relative to finitist mathematics. That focus rested on
the observation that the statement formulating the syntactic consistency of T
is equivalent to the reflection principle Pr(a, ‘s’) → s, where a ranges over (codes of) proofs in T; here Pr is the finitist proof predicate for T, s is a finitistically meaningful statement, and ‘s’ its translation into the language of T. If one could establish finitistically the
consistency of T, one could be sure on
finitist grounds that T is a reliable
instrument for the proof of finitist statements. There are many examples of
significant relative consistency proofs: (i) non-Euclidean geometry relative to Euclidean, Euclidean geometry relative to analysis; (ii) set theory with the axiom of choice relative to set theory without the axiom of choice, set theory with the negation of the axiom of choice relative to set theory; (iii) classical
arithmetic relative to intuitionistic arithmetic, subsystems of classical
analysis relative to intuitionistic theories of constructive ordinals. The
mathematical significance of relative consistency proofs is often brought out
by sharpening them to establish conservative extension results; the latter may
then ensure, e.g., that the theories have the same class of provably total
functions. The initial motivation for such arguments is, however, frequently
philosophical: one wants to guarantee the coherence of the original theory on
an epistemologically distinguished basis.
Constant, Benjamin, in
full, Henri-Benjamin Constant de Rebecque (1767–1830), Swiss-born defender of
liberalism and passionate analyst of French and European politics. He welcomed
the French Revolution but not the Reign of Terror, the violence of which he
avoided by accepting a lowly diplomatic post in Braunschweig (1787–94). In 1795
he returned to Paris with Madame de Staël and intervened in parliamentary
debates. His pamphlets opposed both extremes, the Jacobin and the Bonapartist.
Impressed by Rousseau’s Social Contract, he came to fear that, like Napoleon’s
dictatorship, the “general will” could threaten civil rights. He had first
welcomed Napoleon, but turned against his autocracy. He favored parliamentary
democracy, separation of church and state, and a bill of rights. The high point
of his political career came with membership in the Tribunat (1800–02), a consultative chamber appointed by the Senate. His centrist position is evident in the Principes de politique (1806–10). Had not republican terror been as destructive as the Empire? In chapters 16–17, Constant opposes the liberty of the ancients to that of the moderns. He assumes that the Greek world was
given to war, and therefore strengthened “political liberty” that favors the
state over the individual (the liberty of the ancients). Fundamentally optimistic, he believed that war was a thing of the past, and that the modern world needs to protect “civil liberty,” i.e., the liberty of the individual (the liberty of the moderns). The great merit of Constant’s comparison is the
analysis of historical forces, the theory that governments must support current
needs and do not depend on deterministic factors such as the size of the state,
its form of government, geography, climate, and race. Here he contradicts
Montesquieu. The opposition between ancient and modern liberty expresses a
radical liberalism that did not seem to fit French politics. However, it was
the beginning of the liberal tradition, contrasting political liberty in the
service of the state with the civil liberty of the citizen (cf. Mill’s On Liberty, 1859, and Berlin’s Two Concepts of Liberty, 1958). Principes remained
in manuscript until 1861; the scholarly editions of Étienne Hofmann (1980) are
far more recent. Hofmann calls Principes the essential text between Montesquieu
and Tocqueville. It was translated into English as Constant, Political Writings (ed. Biancamaria Fontana, 1988 and 1997). Forced into retirement by Napoleon, Constant
wrote his literary masterpieces, Adolphe and the diaries. He completed the
Principes, then turned to De la religion (6 vols.), which he considered his
supreme achievement.
constitution, a relation
between concrete particulars (including objects and events) and
their parts, according to which at some time t, a concrete particular is said
to be constituted by the sum of its parts without necessarily being identical
with that sum. For instance, at some specific time t, Mt. Everest is
constituted by the various chunks of rock and other matter that form Everest at
t, though at t Everest would still have been Everest even if, contrary to fact,
some particular rock that is part of the sum had been absent. Hence, although
Mt. Everest is not identical to the sum of its material parts at t, it is
constituted by them. The relation of constitution figures importantly in recent
attempts to articulate and defend metaphysical physicalism (naturalism). To
capture the idea that all that exists is ultimately physical, we may say that
at the lowest level of reality, there are only microphysical phenomena,
governed by the laws of microphysics, and that all other objects and events are
ultimately constituted by objects and events at the microphysical level.
contextualism, the view
that inferential justification always takes place against a background of
beliefs that are themselves in no way evidentially supported. The view has not
often been defended by name, but Dewey, Popper, Austin, and Wittgenstein are
arguably among its notable exponents. As this list perhaps suggests,
contextualism is closely related to the “relevant alternatives” conception of
justification, according to which claims to knowledge are justified not by
ruling out any and every logically possible way in which what is asserted might
be false or inadequately grounded, but by excluding certain especially relevant
alternatives or epistemic shortcomings, these varying from one context of
inquiry to another. Formally, contextualism resembles foundationalism. But it
differs from traditional, or substantive, foundationalism in two crucial
respects. First, foundationalism insists that basic beliefs be self-justifying
or intrinsically credible. True, for contemporary foundationalists, this
intrinsic credibility need not amount to incorrigibility, as earlier theorists
tended to suppose: but some degree of intrinsic credibility is indispensable
for basic beliefs. Second, substantive foundational theories confine intrinsic
credibility, hence the status of being epistemologically basic, to beliefs of
some fairly narrowly specified kinds. By contrast, contextualists reject all
forms of the doctrine of intrinsic credibility, and in consequence place no
restrictions on the kinds of beliefs that can, in appropriate circumstances,
function as contextually basic. They regard this as a strength of their
position, since explaining and defending attributions of intrinsic credibility
has always been the foundationalist’s main problem. Contextualism is also
distinct from the coherence theory of justification, foundationalism’s
traditional rival. Coherence theorists are as suspicious as
contextualists of the foundationalist’s specified kinds of basic beliefs. But
coherentists react by proposing a radically holistic model of inferential
justification, according to which a belief becomes justified through
incorporation into a suitably coherent overall system of beliefs or “total
view.” There are many well-known problems with this approach: the criteria of
coherence have never been very clearly articulated; it is not clear what
satisfying such criteria has to do with making our beliefs likely to be true;
and since it is doubtful whether anyone has a very clear picture of his system
of beliefs as a whole, to insist that justification involves comparing the
merits of competing total views seems to subject ordinary justificatory
practices to severe idealization. Contextualism, in virtue of its formal affinity
with foundationalism, claims to avoid all such problems. Foundationalists and
coherentists are apt to respond that contextualism reaps these benefits by
failing to show how genuinely epistemic justification is possible.
Contextualism, they charge, is finally indistinguishable from the skeptical
view that “justification” depends on unwarranted assumptions. Even if, in
context, these are pragmatically acceptable, epistemically speaking they are
still just assumptions. This objection raises the question whether
contextualists mean to answer the same questions as more traditional theorists,
or answer them in the same way. Traditional theories of justification are
framed so as to respond to highly general skeptical questions (e.g., are we justified in any of our beliefs about the external world?). It may be that contextualist theories are or should
be advanced, not as direct answers to skepticism, but in conjunction with
attempts to diagnose or dissolve traditional skeptical problems. Contextualists
need to show how and why traditional demands for “global” justification
misfire, if they do. If traditional skeptical problems are taken at face value,
it is doubtful whether contextualism can answer them.
Continental philosophy,
the gradually changing spectrum of philosophical views that in the twentieth
century developed in Continental Europe and that are notably different from the
various forms of analytic philosophy that during the same period flourished in
the Anglo-American world. Immediately after World War II the expression was
more or less synonymous with ‘phenomenology’. The latter term, already used
earlier in German idealism, received a completely new meaning in the work of
Husserl. Later on the term was also applied, often with substantial changes in
meaning, to the thought of a great number of other Continental philosophers
such as Scheler, Alexander Pfänder, Hedwig Conrad-Martius, Nicolai Hartmann,
and most philosophers mentioned below. For Husserl the aim of philosophy is to
prepare humankind for a genuinely philosophical form of life, in and through
which each human being gives him- or herself a rule through reason. Since the
Renaissance, many philosophers have tried in vain to materialize this aim. In
Husserl’s view, the reason was that philosophers failed to use the proper
philosophical method. Husserl’s phenomenology was meant to provide philosophy
with the method needed. Among those deeply influenced by Husserl’s ideas the
so-called existentialists must be mentioned first. If ‘existentialism’ is construed
strictly, it refers mainly to the philosophy of Sartre and Beauvoir. In a very
broad sense it refers to the ideas of an entire group of thinkers influenced
methodologically by Husserl and in content by Marcel, Heidegger, Sartre, or
Merleau-Ponty. In this case one often speaks of existential phenomenology. When
Heidegger’s philosophy became better known in the Anglo-American world,
‘Continental philosophy’ received again a new meaning. From Heidegger’s first
publication, Being and Time (1927), it was clear that his conception of
phenomenology differs from that of Husserl in several important respects. That
is why he qualified the term and spoke of hermeneutic phenomenology and
clarified the expression by examining the “original” meaning of the Greek words
from which the term was formed. In his view phenomenology must try “to let that
which shows itself be seen from itself in the very way in which it shows itself
from itself.” Heidegger applied the method first to the mode of being of man
with the aim of approaching the question concerning the meaning of being itself
through this phenomenological interpretation. Of those who took their point of
departure from Heidegger, but also tried to go beyond him, Gadamer and Ricoeur
must be mentioned. The structuralist movement in France added another
connotation to ‘Continental philosophy’. The term structuralism above all
refers to an activity, a way of knowing, speaking, and acting that extends over
a number of distinguished domains of human activity: linguistics, aesthetics,
anthropology, psychology, psychoanalysis, mathematics, philosophy of science,
and philosophy itself. Structuralism, which became a fashion in Paris and later
in Western Europe generally, reached its high point on the Continent between
1950 and 1970. It was inspired by ideas first formulated by Russian formalism
(1916–26) and Czech structuralism (1926–40), but also by ideas derived from the works
of Marx and Freud. In France Foucault, Barthes, Althusser, and Derrida were the
leading figures. Structuralism is not a new philosophical movement; it must be
characterized by structuralist activity, which is meant to evoke ever new
objects. This can be done in a constructive and a reconstructive manner, but
these two ways of evoking objects can never be separated. One finds the
constructive aspect primarily in structuralist aesthetics and linguistics,
whereas the reconstructive aspect is more apparent in philosophical reflections
upon the structuralist activity. Influenced by Nietzschean ideas, structuralism
later developed in a number of directions, including poststructuralism; in this
context the works of Gilles Deleuze, Lyotard, Irigaray, and Kristeva must be
mentioned. After 1970 ‘Continental philosophy’ received again a new
connotation: deconstruction. At first deconstruction presented itself as a
reaction against philosophical hermeneutics, even though both deconstruction
and hermeneutics claim their origin in Heidegger’s reinterpretation of
Husserl’s phenomenology. The leading philosopher of the movement is Derrida,
who at first tried to think along phenomenological and structuralist lines.
Derrida formulated his “final” view in a linguistic form that is both complex
and suggestive. It is not easy in a few sentences to state what deconstruction
is. Generally speaking one can say that what is being deconstructed is texts;
they are deconstructed to show that there are conflicting conceptions of
meaning and implication in every text so that it is never possible definitively
to show what a text really means. Derrida’s own deconstructive work is
concerned mainly with philosophical texts, whereas others apply the “method”
predominantly to literary texts. What according to Derrida distinguished
philosophy is its reluctance to face the fact that it, too, is a product of
linguistic and rhetorical figures. Deconstruction is here that process of close
reading that focuses on those elements where philosophers in their work try to
erase all knowledge of its own linguistic and rhetorical dimensions. It has
been said that if construction typifies modern thinking, then deconstruction is
the mode of thinking that radically tries to overcome modernity. Yet this view
is simplistic, since one also deconstructs Plato and many other thinkers and
philosophers of the premodern age. People concerned with social and political
philosophy who have sought affiliation with Continental philosophy often appeal
to the so-called critical theory of the Frankfurt School in general, and to
Habermas’s theory of communicative action in particular. Habermas’s view, like
the position of the Frankfurt School in general, is philosophically eclectic.
It tries to bring into harmony ideas derived from Kant, German idealism, and
Marx, as well as ideas from the sociology of knowledge and the social sciences.
Habermas believes that his theory makes it possible to develop a communication
community without alienation that is guided by reason in such a way that the
community can stand freely in regard to the objectively given reality. Critics
have pointed out that in order to make this theory work Habermas must
substantiate a number of assumptions that until now he has not been able to
justify.
contingent, neither
impossible nor necessary; i.e., both possible and non-necessary. The modal
property of being contingent is attributable to a proposition, state of
affairs, event, or (more debatably) an object. Muddles about the relationship
between this and other modal properties have abounded ever since Aristotle, who
initially conflated contingency with possibility but later realized that
something that is possible may also be necessary, whereas something that is
contingent cannot be necessary. Even today many philosophers are not clear
about the “opposition” between contingency and necessity, mistakenly supposing
them to be contradictory notions probably because within the domain of true
propositions the contingent and the necessary are indeed both exclusive and
exhaustive of one another. But the contradictory of ‘necessary’ is
‘non-necessary’; that of ‘contingent’ is ‘non-contingent’, as an extended modal square of opposition shows. These logico-syntactical relationships are preserved through various semantical interpretations, such as those involving: (a) the logical modalities (proposition P is logically contingent just when P is neither a logical truth nor a logical falsehood); (b) the causal or physical modalities (state of affairs or event E is physically contingent just when E is neither physically necessary nor physically impossible); and (c) the deontic modalities (act A is morally indeterminate just when A is neither morally obligatory nor morally forbidden). In none of these cases does
‘contingent’ mean ‘dependent,’ as in the phrase ‘is contingent upon’. Yet just
such a notion of contingency seems to feature prominently in certain
formulations of the cosmological argument, all created objects being said to be
contingent beings and God alone to be a necessary or non-contingent being.
Conceptual clarity is not furthered by assimilating this sense of ‘contingent’
to the others.
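The definition under (a) can be pictured with a toy possible-worlds model; the following Python sketch (an added illustration, with an arbitrary three-world set) treats a proposition as the set of worlds at which it is true:

    worlds = {"w1", "w2", "w3"}

    def necessary(prop):
        return prop == worlds          # true at every world

    def possible(prop):
        return len(prop) > 0           # true at some world

    def contingent(prop):
        # Neither impossible nor necessary; i.e., possible and non-necessary.
        return possible(prop) and not necessary(prop)

    assert contingent({"w1"})          # true somewhere but not everywhere
    assert not contingent(worlds)      # necessary, hence non-contingent
    assert not contingent(set())       # impossible, hence non-contingent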
continuum problem, an
open question that arose in Cantor’s theory of infinite cardinal numbers. By
definition, two sets have the same cardinal number if there is a one-to-one
correspondence between them. For example, the function that sends 0 to 0, 1 to
2, 2 to 4, etc., shows that the set of even natural numbers has the same
cardinal number as the set of all natural numbers, namely ℵ0. That ℵ0 is not the only infinite cardinal follows from Cantor’s theorem: the power set of any set (i.e., the set of all its subsets) has a greater cardinality than the set itself. So, e.g., the power set of the natural numbers, i.e., the set of all sets of natural numbers, has a cardinal number greater than ℵ0. The first infinite number greater than ℵ0 is ℵ1; the next after that is ℵ2, and so on. When arithmetical operations are extended into the infinite, the cardinal number of the power set of the natural numbers turns out to be 2^ℵ0. By Cantor’s theorem, 2^ℵ0 must be greater than ℵ0; the conjecture that it is equal to ℵ1 is Cantor’s continuum hypothesis (in symbols, CH, or 2^ℵ0 = ℵ1). Since 2^ℵ0 is also the
cardinality of the set of points on a continuous line, CH can also be stated in
this form: any infinite set of points on a line can be brought into one-to-one
correspondence either with the set of natural numbers or with the set of all
points on the line. Cantor and others attempted to prove CH, without success.
It later became clear, due to the work of Gödel and Cohen, that their failure
was inevitable: the continuum hypothesis can neither be proved nor disproved
from the axioms of set theory (ZFC). The question of its truth or falsehood (the continuum problem) remains open.
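Cantor’s theorem concerns infinite sets, but the cardinality comparison it generalizes can be seen directly in the finite case; a small Python sketch (an added illustration only; the finite case of course proves nothing about CH):

    from itertools import chain, combinations

    def power_set(s):
        # All subsets of s, as tuples, from the empty set up to s itself.
        s = list(s)
        return list(chain.from_iterable(
            combinations(s, r) for r in range(len(s) + 1)))

    s = {0, 1, 2}
    subsets = power_set(s)
    # For a finite set, |P(S)| = 2**|S|, which always exceeds |S|.
    assert len(subsets) == 2 ** len(s) > len(s)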
contractarianism, a
family of moral and political theories that make use of the idea of a social
contract. Traditionally philosophers such as Hobbes and Locke used the social
contract idea to justify certain conceptions of the state. In the twentieth
century philosophers such as John Rawls have used the social contract notion to
define and defend moral conceptions (both conceptions of political justice and individual morality), often (but not always) doing so in addition to developing
social contract theories of the state. The term ‘contractarian’ most often applies
to this second type of theory. There are two kinds of moral argument that the
contract image has spawned, the first rooted in Hobbes and the second rooted in
Kant. Hobbesians start by insisting that what is valuable is what a person
desires or prefers, not what he ought to desire or prefer (for no such prescriptively powerful object exists); and rational action is action that achieves or maximizes the satisfaction of desires or preferences. They go
on to insist that moral action is rational for a person to perform if and only
if such action advances the satisfaction of his desires or preferences. And
they argue that because moral action leads to peaceful and harmonious living
conducive to the satisfaction of almost everyone’s desires or preferences,
moral actions are rational for almost everyone and thus “mutually agreeable.”
But Hobbesians believe that, to ensure that no cooperative person becomes the
prey of immoral aggressors, moral actions must be the conventional norms in a
community, so that each person can expect that if she behaves cooperatively,
others will do so too. These conventions constitute the institution of morality
in a society. So the Hobbesian moral theory is committed to the idea that morality
is a human-made institution, which is justified only to the extent that it
effectively furthers human interests. Hobbesians explain the existence of
morality in society by appealing to the convention-creating activities of human
beings, while arguing that the justification of morality in any human society
depends upon how well its moral conventions serve individuals’ desires or
preferences. By considering “what we could agree to” if we reappraised and
redid the cooperative conventions in our society, we can determine the extent
to which our present conventions are “mutually agreeable” and so rational for
us to accept and act on. Thus, Hobbesians invoke both actual agreements (or rather, conventions) and hypothetical agreements (which involve considering what conventions would be “mutually agreeable”) at different points in their theory; the former are what they believe our moral life consists in; the latter are what they believe our moral life should consist in (i.e., what our actual moral life should model). So the notion of the contract does not do justificational work by itself
in the Hobbesian moral theory: this term is used only metaphorically. What we
“could agree to” has moral force for the Hobbesians not because make-believe
promises in hypothetical worlds have any binding force but because this sort of
agreement is a device that merely reveals how the agreed-upon outcome is
rational for all of us. In particular, thinking about “what we could all agree
to” allows us to construct a deduction of practical reason to determine what
policies are mutually advantageous. The second kind of contractarian theory is
derived from the moral theorizing of Kant. In his later writings Kant proposed
that the “idea” of the “Original Contract” could be used to determine what
policies for a society would be just. When Kant asks “What could people agree
to?,” he is not trying to justify actions or policies by invoking, in any
literal sense, the consent of the people. Only the consent of real people can
be legitimating, and Kant talks about hypothetical agreements made by
hypothetical people. But he does believe these make-believe agreements have
moral force for us because the process by which these people reach agreement is
morally revealing. Kant’s contracting process has been further developed by
subsequent philosophers, such as Rawls, who concentrates on defining the
hypothetical people who are supposed to make this agreement so that their
reasoning will not be tarnished by immorality, injustice, or prejudice, thus
ensuring that the outcome of their joint deliberations will be morally sound.
Those contractarians who disagree with Rawls define the contracting parties in
different ways, thereby getting different results. The Kantians’ social
contract is therefore a device used in their theorizing to reveal what is just
or what is moral. So like Hobbesians, their contract talk is really just a way
of reasoning that allows us to work out conceptual answers to moral problems.
But whereas the Hobbesians’ use of contract language expresses the fact that,
on their view, morality is a human invention which if it is well invented ought
to be mutually advantageous, the Kantians’ use of the contract language is
meant to show that moral principles and conceptions are provable theorems derived
from a morally revealing and authoritative reasoning process or “moral proof
procedure” that makes use of the social contract idea. Both kinds of
contractarian theory are individualistic, in the sense that they assume that
moral and political policies must be justified with respect to, and answer the
needs of, individuals. Accordingly, these theories have been criticized by
communitarian philosophers, who argue that moral and political policies can and
should be decided on the basis of what is best for a community. They are also
attacked by utilitarian theorists, whose criterion of morality is the
maximization of the utility of the community, and not the mutual satisfaction
of the needs or preferences of individuals. Contractarians respond that whereas
utilitarianism fails to take seriously the distinction between persons,
contractarian theories make moral and political policies answerable to the
legitimate interests and needs of individuals, which, contra the
communitarians, they take to be the starting point of moral theorizing.
contraposition, the
immediate logical operation on any categorical proposition that is accomplished
by first forming the complements of both the subject term and the predicate
term of that proposition and then interchanging these complemented terms. Thus,
contraposition applied to the categorical proposition ‘All cats are felines’
yields ‘All non-felines are non-cats’, where ‘non-feline’ and ‘non-cat’ are,
respectively, the complements or complementary terms of ‘feline’ and ‘cat’. The
result of applying contraposition to a categorical proposition is said to be
the contrapositive of that proposition.
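The mechanical character of the operation can be shown with a toy Python function (an added sketch covering A-type propositions only; the function name is my own):

    def contrapositive(subject, predicate):
        # Complement both terms, then interchange the complemented terms.
        return f"All non-{predicate} are non-{subject}"

    assert contrapositive("cats", "felines") == "All non-felines are non-cats"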
contraries, any pair of
propositions that cannot both be true but can both be false; derivatively, any
pair of properties that cannot both apply to a thing but that can both fail to
apply to a thing. Thus the propositions ‘This object is red all over’ and ‘This
object is green all over’ are contraries, as are the properties of being red
all over and being green all over. Traditionally, it was considered that the
categorical A-proposition ‘All S’s are P’s’ and the categorical E-proposition
‘No S’s are P’s’ were contraries; but according to De Morgan and most
subsequent logicians, these two propositions are both true when there are no
S’s at all, so that modern logicians do not usually regard the categorical A-
and E-propositions as being true contraries.
contravalid, designating
a proposition P in a logical system such that every proposition in the system
is a consequence of P. In most of the typical and familiar logical systems,
contravalidity coincides with self-contradictoriness.
control, an apparently
causal phenomenon closely akin to power and important for such topics as
intentional action, freedom, and moral responsibility. Depending upon the
control you had over the event, your finding a friend’s stolen car may or may
not be an intentional action, a free action, or an action for which you deserve
moral credit. Control seems to be a causal phenomenon. Try to imagine
controlling a car, say, without causing anything. If you cause nothing, you
have no effect on the car, and one does not control a thing on which one has no
effect. But control need not be causally deterministic. Even if a genuine
randomizer in your car’s steering mechanism gives you only a 99 percent chance
of making turns you try to make, you still have considerable control in that
sphere. Some philosophers claim that we have no control over anything if causal
determinism is true. That claim is false. When you drive your car, you normally
are in control of its speed and direction, even if our world happens to be
deterministic.
conventionalism, the
philosophical doctrine that logical truth and mathematical truth are created by
our choices, not dictated or imposed on us by the world. The doctrine is a more
specific version of the linguistic theory of logical and mathematical truth,
according to which the statements of logic and mathematics are true because of
the way people use language. Of course, any statement owes its truth to some
extent to facts about linguistic usage. For example, ‘Snow is white’ is true in
English because of the facts that (1) ‘snow’ denotes snow, (2) ‘is white’ is true of white things, and (3) snow is white. What the linguistic theory asserts is that statements of logic and mathematics owe their truth entirely to the way people use language. Extralinguistic facts such as (3) are not relevant to the
truth of such statements. Which aspects of linguistic usage produce logical
truth and mathematical truth? The conventionalist answer is: certain
linguistic conventions. These conventions are said to include rules of
inference, axioms, and definitions. The idea that geometrical truth is truth we
create by adopting certain conventions received support from the discovery of
non-Euclidean geometries. Prior to this discovery, Euclidean geometry had been
seen as a paradigm of a priori knowledge. The further discovery that these
alternative systems are consistent made Euclidean geometry seem rejectable
without violating rationality. Whether we adopt the Euclidean system or a
non-Euclidean system seems to be a matter of our choice based on such pragmatic
considerations as simplicity and convenience. Moving to number theory,
conventionalism received a prima facie setback by the discovery that arithmetic
is incomplete if consistent. For let S be an undecidable sentence, i.e., a
sentence for which there is neither proof nor disproof. Suppose S is true. In
what conventions does its truth consist? Not axioms, rules of inference, and
definitions. For if its truth consisted in these items it would be provable.
Suppose S is not true. Then its negation must be true. In what conventions does
its truth consist? Again, no answer. It appears that if S is true or its
negation is true and if neither S nor its negation is provable, then not all
arithmetic truth is truth by convention. A response the conventionalist could
give is that neither S nor its negation is true if S is undecidable. That is,
the conventionalist could claim that arithmetic has truth-value gaps. As to
logic, all truths of classical logic are provable and, unlike the case of
number theory and geometry, axioms are dispensable. Rules of inference suffice.
As with geometry, there are alternatives to classical logic. The intuitionist,
e.g., does not accept the rule ‘From not-not-A infer A’. Even detachment (‘From A, if A then B, infer B’) is rejected in some multivalued systems of
logic. These facts support the conventionalist doctrine that adopting any set
of rules of inference is a matter of our choice based on pragmatic
considerations. But the anti-conventionalist might respond: consider a simple
logical truth such as ‘If Tom is tall, then Tom is tall’. Granted that this is
provable by rules of inference from the empty set of premises, why does it
follow that its truth is not imposed on us by extralinguistic facts about Tom?
If Tom is tall the sentence is true because its consequent is true. If Tom is
not tall the sentence is true because its antecedent is false. In either case
the sentence owes its truth to facts about Tom.
convention T, a criterion
of material adequacy of proposed truth definitions discovered, formally
articulated, adopted, and so named by Tarski in connection with his 1929
definition of the concept of truth in a formalized language. Convention T is
one of the most important of several independent proposals Tarski made
concerning philosophically sound and logically precise treatment of the concept
of truth. Various of these proposals have been criticized, but convention T has
remained virtually unchallenged and is regarded almost as an axiom of analytic
philosophy. To say that a proposed definition of an established concept is
materially adequate is to say that it is “neither too broad nor too narrow,”
i.e., that the concept it characterizes is coextensive with the established
concept. Since, as Tarski emphasized, for many formalized languages there are
no criteria of truth, it would seem that there can be no general criterion of
material adequacy of truth definitions. But Tarski brilliantly finessed this
obstacle by discovering a specification that is fulfilled by the established
correspondence concept of truth and that has the further property that any two
concepts fulfilling it are necessarily coextensive. Basically, convention T
requires that to be materially adequate a proposed truth definition must imply
all of the infinitely many relevant Tarskian biconditionals; e.g., the sentence
‘Some perfect number is odd’ is true if and only if some perfect number is odd.
Loosely speaking, a Tarskian biconditional for English is a sentence obtained
from the form ‘The sentence ——— is true if and only if ——’ by filling the right
blank with a sentence and filling the left blank with a name of the sentence.
Tarski called these biconditionals “equivalences of the form T” and referred to
the form as a “scheme.” Later writers also refer to the form as “schema
T.”
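The schema lends itself to mechanical illustration; a tiny Python sketch (added here, covering English sentences named by quotation) produces Tarskian biconditionals:

    def tarskian_biconditional(sentence):
        # Fill the left blank with a (quotation-mark) name of the
        # sentence, and the right blank with the sentence itself.
        return f"The sentence '{sentence}' is true if and only if {sentence}"

    print(tarskian_biconditional("some perfect number is odd"))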
converse. 1 Narrowly, the result of the immediate logical operation called
conversion on any categorical proposition, accomplished by interchanging the
subject term and the predicate term of that proposition. Thus, the converse of
the categorical proposition ‘All cats are felines’ is ‘All felines are cats’. 2
More broadly, the proposition obtained from a given ‘if . . . then . . .’
conditional proposition by interchanging the antecedent and the consequent
clauses, i.e., the propositions following the ‘if’ and the ‘then’, respectively;
also, the argument obtained from an argument of the form ‘P; therefore Q’ by
interchanging the premise and the conclusion.
converse, outer and
inner, respectively, the result of “converting” the two “terms” or the relation
verb of a relational sentence. The outer converse of ‘Abe helps Ben’ is ‘Ben
helps Abe’ and the inner converse is ‘Abe is helped by Ben’. In simple, or
atomic, sentences the outer and inner converses express logically equivalent
propositions, and thus in these cases no informational ambiguity arises from
the adjunction of ‘and conversely’ or ‘but not conversely’, despite the fact
that such adjunction does not indicate which, if either, of the two converses
intended is meant. However, in complex, or quantified, relational sentences
such as ‘Every integer precedes some integer’ genuine informational ambiguity
is produced. Under normal interpretations of the respective sentences, the
outer converse expresses the false proposition that some integer precedes every
integer, the inner converse expresses the true proposition that every integer
is preceded by some integer. More complicated considerations apply in cases of
quantified doubly relational sentences such as ‘Every integer precedes every
integer exceeding it’. The concept of scope explains such structural ambiguity:
in the sentence ‘Every integer precedes some integer and conversely’,
‘conversely’ taken in the outer sense has wide scope, whereas taken in the
inner sense it has narrow scope.
Conway, Anne (c.1630–79), English philosopher whose Principia philosophiae antiquissimae et recentissimae (1690; English translation, The Principles of the Most Ancient and Modern Philosophy, 1692) proposes a monistic ontology in which all created things are
modes of one spiritual substance emanating from God. This substance is made up
of an infinite number of hierarchically arranged spirits, which she calls
monads. Matter is congealed spirit. Motion is conceived not dynamically but
vitally. Lady Conway’s scheme entails a moral explanation of pain and the
possibility of universal salvation. She repudiates the dualism of both
Descartes and her teacher, Henry More, as well as the materialism of Hobbes and
Spinoza. The work shows the influence of cabalism and affinities with the
thought of the mentor of her last years, Francis Mercurius van Helmont, through
whom her philosophy became known to Leibniz.
copula, in logic, a form
of the verb ‘to be’ that joins subject and predicate in singular and
categorical propositions. In ‘George is wealthy’ and ‘Swans are beautiful’,
e.g., ‘is’ and ‘are’, respectively, are copulas. Not all occurrences of forms
of ‘be’ count as copulas. In sentences such as ‘There are 51 states’, ‘are’ is
not a copula, since it does not join a subject and a predicate, but occurs simply
as a part of the quantifier term ‘there are’.
Cordemoy, Géraud de
(1626–84), French philosopher and member of the Cartesian school. His most
important work is his Le discernement du corps et de l’âme en six discours,
published in 1666 and reprinted under slightly different titles a number of
times thereafter. Also important are the Discours physique de la parole (1668), a Cartesian theory of language and communication; and Une lettre écrite à un sçavant religieux (1668), a defense of Descartes’s orthodoxy on certain questions
in natural philosophy. Cordemoy also wrote a history of France, left incomplete
at his death. Like Descartes, Cordemoy advocated a mechanistic physics
explaining physical phenomena in terms of size, shape, and local motion, and
held that minds are incorporeal thinking substances. Like most Cartesians,
Cordemoy also advocated a version of occasionalism. But unlike other
Cartesians, he argued for atomism and admitted the void. These innovations were
not welcomed by other members of the Cartesian school. But Cordemoy is often
cited by later thinkers, such as Leibniz, as an important seventeenth-century
advocate of atomism.
corners, also called
corner quotes, quasi-quotes, a notational device (⌜ ⌝) introduced by Quine (Mathematical Logic, 1940) to provide a conveniently brief way of speaking generally about unspecified expressions of such and such kind. For example, a logician might want a conveniently brief way of saying in the metalanguage that the result of writing a wedge ‘∨’ (the dyadic logical connective for a truth-functional use of ‘or’) between any two well-formed formulas (wffs) in the object language is itself a wff. Supposing the Greek letters ‘φ’ and ‘ψ’ available in the metalanguage as variables ranging over wffs in the object language, it is tempting to think that the formation rule stated above can be succinctly expressed simply by saying that if φ and ψ are wffs, then ‘φ ∨ ψ’ is a wff. But this will not do, for ‘φ ∨ ψ’ is not a wff. Rather, it is a hybrid expression of two variables of the metalanguage and a dyadic logical connective of the object language. The problem is that putting quotation marks around the Greek letters merely results in designating those letters themselves, not, as desired, in designating the context of the unspecified wffs. Quine’s device of corners allows one to transcend this limitation of straight quotation since quasi-quotation, e.g., ⌜φ ∨ ψ⌝, amounts to quoting the constant contextual background, ‘__ ∨ __’, and imagining the unspecified expressions φ and ψ written in the blanks.
corresponding conditional
of a given argument, any conditional whose antecedent is a logical conjunction
of all of the premises of the argument and whose consequent is the conclusion.
The two conditionals, ‘if Abe is Ben and Ben is wise, then Abe is wise’ and ‘if
Ben is wise and Abe is Ben, then Abe is wise’, are the two corresponding
conditionals of the argument whose premises are ‘Abe is Ben’ and ‘Ben is wise’
and whose conclusion is ‘Abe is wise’. For a one-premise argument, the
corresponding conditional is the conditional whose antecedent is the premise
and whose consequent is the conclusion. The limiting cases of the empty and
infinite premise sets are treated in different ways by different logicians; one
simple treatment considers such arguments as lacking corresponding
conditionals. The principle of corresponding conditionals is that in order for
an argument to be valid it is necessary and sufficient for all its
corresponding conditionals to be tautological. The commonly used expression
‘the corresponding conditional of an argument’ is also used when two further
stipulations are in force: first, that an argument is construed as having an
ordered sequence of premises rather than an unordered set of premises; second,
that conjunction is construed as a polyadic operation that produces in a unique
way a single premise from a sequence of premises rather than as a dyadic
operation that combines premises two by two. Under these stipulations the principle
of the corresponding conditional is that in order for an argument to be valid
it is necessary and sufficient for its corresponding conditional to be valid.
These principles are closely related to modus ponens, to conditional proof, and
to the so-called deduction theorem.
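For truth-functional argument forms, the principle can be checked mechanically. A minimal Python sketch, assuming formulas are represented as functions from truth assignments to truth-values (the representation is illustrative, not standard):

from itertools import product

def atom(name):    return lambda v: v[name]
def conj(p, q):    return lambda v: p(v) and q(v)
def implies(p, q): return lambda v: (not p(v)) or q(v)

def corresponding_conditional(premises, conclusion):
    # antecedent: the conjunction of all premises; consequent: the conclusion
    antecedent = premises[0]
    for prem in premises[1:]:
        antecedent = conj(antecedent, prem)
    return implies(antecedent, conclusion)

def tautological(formula, names):
    # check every assignment of truth-values to the atoms
    return all(formula(dict(zip(names, values)))
               for values in product([True, False], repeat=len(names)))

p, q = atom('p'), atom('q')
print(tautological(corresponding_conditional([p, implies(p, q)], q), 'pq'))  # True: modus ponens
print(tautological(corresponding_conditional([q], p), 'pq'))                 # False: q alone does not yield p

Note that an argument such as the Abe/Ben example would fail this purely truth-functional test, since its validity turns on identity rather than on truth-functional form.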
counterfactuals, also
called contrary-to-fact conditionals, subjunctive conditionals that
presuppose the falsity of their antecedents, such as ‘If Hitler had
invaded England, Germany would have won’ and ‘If I were you, I’d run’.
Conditionals or hypothetical statements are compound statements of the form ‘If
p, then q’, or equivalently ‘q if p’. Component p is described as the
antecedent protasis and q as the consequent apodosis. A conditional like ‘If
Oswald did not kill Kennedy, then someone else did’ is called indicative,
because both the antecedent and consequent are in the indicative mood. One like
‘If Oswald had not killed Kennedy, then someone else would have’ is
subjunctive. Many subjunctive and all indicative conditionals are open,
presupposing nothing about the antecedent. Unlike ‘If Bob had won, he’d be
rich’, neither ‘If Bob should have won, he would be rich’ nor ‘If Bob won, he
is rich’ implies that Bob did not win. Counterfactuals presuppose, rather than
assert, the falsity of their antecedents. ‘If Reagan had been president, he
would have been famous’ seems inappropriate and out of place, but not false,
given that Reagan was president. The difference between counterfactual and open
subjunctives is less important logically than that between subjunctives and
indicatives. Whereas the indicative conditional about Kennedy is true, the
subjunctive is probably false. Replace ‘someone’ with ‘no one’ and the
truth-values reverse. The most interesting logical feature of counterfactuals
is that they are not truth-functional. A truth-functional compound is one whose
truth-value is completely determined in every possible case by the truth-values
of its components. For example, the falsity of ‘The President is a grandmother’
and ‘The President is childless’ logically entails the falsity of ‘The
President is a grandmother and childless’: all conjunctions with false
conjuncts are false. But whereas ‘If the President were a grandmother, the
President would be childless’ is false, other counterfactuals with equally
false components are true, such as ‘If the President were a grandmother, the
President would be a mother’. The truth-value of a counterfactual is determined
in part by the specific content of its components. This property is shared by
indicative and subjunctive conditionals generally, as can be seen by varying
the wording of the example. In marked contrast, the material conditional, p ⊃ q,
of modern logic, defined as meaning that either p is false or q is true, is
completely truth-functional. ‘The President is a grandmother ⊃ The President is
childless’ is just as true as ‘The President is a grandmother ⊃ The President
is a mother’. While stronger than the material conditional, the counterfactual
is weaker than the strict conditional, p ⥽ q, of modern modal logic, which says
that p ⊃ q is necessarily true. ‘If the switch had been flipped, the light
would be on’ may in fact be true even though it is possible for the switch to
have been flipped without the light’s being on because the bulb could have
burned out. The fact that counterfactuals are neither strict nor material
conditionals generated the problem of counterfactual conditionals raised by
Chisholm and Goodman: What are the truth conditions of a counterfactual, and
how are they determined by its components? According to the “metalinguistic”
approach, which resembles the deductive-nomological model of explanation, a
counterfactual is true when its antecedent conjoined with laws of nature and statements
of background conditions logically entails its consequent. On this account, ‘If
the switch had been flipped the light would be on’ is true because the
statement that the switch was flipped, plus the laws of electricity and
statements describing the condition and arrangement of the circuitry, entail
that the light is on. The main problem is to specify which facts are “fixed”
for any given counterfactual and context. The background conditions cannot
include the denials of the antecedent or the consequent, even though they are
true, nor anything else that would not be true if the antecedent were.
Counteridenticals, whose antecedents assert identities, highlight the
difficulty: the background for ‘If I were you, I’d run’ must include facts
about my character and your situation, but not vice versa. Counterlegals like
‘Newton’s laws would fail if planets had rectangular orbits’, whose antecedents
deny laws of nature, show that even the set of laws cannot be all-inclusive.
Another leading approach pioneered by Robert C. Stalnaker and David K. Lewis
extends the possible worlds semantics developed for modal logic, saying that a
counterfactual is true when its consequent is true in the nearest possible
world in which the antecedent is true. The counterfactual about the switch is
true on this account provided a world in which the switch was flipped and the
light is on is closer to the actual world than one in which the switch was
flipped but the light is not on. The main problem is to specify which world is
nearest for any given counterfactual and context. The difference between
indicative and subjunctive conditionals can be accounted for in terms of either
a different set of background conditions or a different measure of nearness.
Counterfactuals turn up in a variety of philosophical contexts. To distinguish
laws like ‘All copper conducts’ from equally true generalizations like
‘Everything in my pocket conducts’, some have observed that while anything
would conduct if it were copper, not everything would conduct if it were in my
pocket. And to have a disposition like solubility, it does not suffice to be
either dissolving or not in water: it must in addition be true that the object
would dissolve if it were in water. It has similarly been suggested that one
event is the cause of another only if the latter would not have occurred if the
former had not; that an action is free only if the agent could or would have
done otherwise if he had wanted to; that a person is in a particular mental
state only if he would behave in certain ways given certain stimuli; and that
an action is right only if a completely rational and fully informed agent would
choose it.
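The Stalnaker-Lewis truth condition can be made concrete in a toy model. In the Python sketch below (a deliberately crude illustration, not anyone’s official semantics), worlds are truth assignments obeying a stipulated ‘law’, and nearness is a weighted count of differences from the actual world; the choice of weights is exactly where the problem of specifying nearness reappears:

from itertools import product

ATOMS = ['flipped', 'bulb_ok', 'light_on']
WEIGHTS = {'flipped': 1, 'bulb_ok': 3, 'light_on': 1}

def lawful(w):
    # stipulated 'law': the light is on iff the switch is flipped
    # and the bulb is intact
    return w['light_on'] == (w['flipped'] and w['bulb_ok'])

WORLDS = [w for w in (dict(zip(ATOMS, values))
                      for values in product([True, False], repeat=3))
          if lawful(w)]

actual = {'flipped': False, 'bulb_ok': True, 'light_on': False}

def distance(w):
    # weighted difference from the actual world; the weights encode the
    # judgment that a burned-out bulb is a bigger departure than a
    # flipped switch -- the contested notion of 'nearness'
    return sum(WEIGHTS[a] for a in ATOMS if w[a] != actual[a])

def counterfactual(antecedent, consequent):
    a_worlds = [w for w in WORLDS if antecedent(w)]
    if not a_worlds:
        return True   # vacuously true when the antecedent is impossible
    return consequent(min(a_worlds, key=distance))

print(counterfactual(lambda w: w['flipped'], lambda w: w['light_on']))  # True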
counterinstance, also
called counterexample. 1 A particular instance of an argument form that has all
true premises but a false conclusion, thereby showing that the form is not
universally valid. The argument form ‘p ∨ q, ~p / ∴ ~q’, for example, is shown
to be invalid by the counterinstance ‘Grass is either red or green; Grass is
not red; Therefore, grass is not green’. 2 A particular false instance of a
statement form, which demonstrates that the form is not a logical truth. A
counterinstance to the form ‘(p ∨ q) ⊃ p’, for example, would be the statement
‘If grass is either red or green, then grass is red’. 3 A particular example
that demonstrates that a universal generalization is false. The universal
statement ‘All large cities in the United States are east of the Mississippi’
is shown to be false by the counterinstance of San Francisco, which is a large
city in the United States that is not east of the Mississippi. V.K.
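Sense 1 invites mechanical search: a counterinstance to a truth-functional form is just a truth assignment making the premises true and the conclusion false. A minimal Python sketch (representation illustrative):

from itertools import product

def counterinstances(premises, conclusion, names):
    # yield every assignment verifying the premises but falsifying the
    # conclusion; any such assignment shows the form is invalid
    for values in product([True, False], repeat=len(names)):
        v = dict(zip(names, values))
        if all(prem(v) for prem in premises) and not conclusion(v):
            yield v

premises   = [lambda v: v['p'] or v['q'],   # p or q
              lambda v: not v['p']]         # not p
conclusion = lambda v: not v['q']           # therefore not q?
print(list(counterinstances(premises, conclusion, 'pq')))
# [{'p': False, 'q': True}] -- e.g., grass is not red but is green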
counterpart theory, a theory that analyzes statements about what is possible and impossible
for individuals statements of de re modality in terms of what holds of
counterparts of those individuals in other possible worlds, a thing’s
counterparts being individuals that resemble it without being identical with
it. The name ‘counterpart theory’ was coined by David Lewis, the theory’s
principal exponent. Whereas some theories analyze ‘Mrs. Simpson might have been
queen of England’ as ‘In some possible world, Mrs. Simpson is queen of
England’, counterpart theory analyzes it as ‘In some possible world, a
counterpart of Mrs. Simpson is queen of a counterpart of England’. The chief
motivation for counterpart theory is a combination of two views: a de re
modality should be given a possible worlds analysis, and b each actual
individual exists only in the actual world, and hence cannot exist with
different properties in other possible worlds. Counterpart theory provides an
analysis that allows ‘Mrs. Simpson might have been queen’ to be true compatibly
with a and b. For Mrs. Simpson’s counterparts in other possible worlds, in
those worlds where she herself does not exist, may have regal properties that
the actual Mrs. Simpson lacks. Counterpart theory is perhaps prefigured in
Leibniz’s theory of possibility.
count noun, a noun that
can occur syntactically a with quantifiers ‘each’, ‘every’, ‘many’, ‘few’,
‘several’, and numerals; b with the indefinite article, ‘an’; and c in the
plural form. The following are examples of count nouns CNs, paired with
semantically similar mass nouns MNs: ‘each dollar / silver’, ‘one composition /
music’, ‘a bed / furniture’, ‘instructions / advice’. MNs but not CNs can occur
with the quantifiers ‘much’ and ‘little’: ‘much poetry / poems’, ‘little bread
/ loaf’. Both CNs and MNs may occur with ‘all’, ‘most’, and ‘some’.
Semantically, CNs but not MNs refer distributively, providing a counting criterion.
It makes sense to ask how many CNs?: ‘How many coins / gold?’ MNs but not CNs
refer collectively. It makes sense to ask how much MN?: ‘How much gold /
coins?’ One problem is that these syntactic and semantic criteria yield
different classifications; another problem is to provide logical forms and
truth conditions for sentences containing mass nouns.
Cournot, Antoine-Augustin
1801–77, French mathematician and economist. A critical realist in scientific
and philosophical matters, he was a conservative in religion and politics. His
Researches into the Mathematical Principles of the Theory of Wealth 1838,
though a fiasco at the time, pioneered mathematical economics. Cournot upheld a
position midway between science and metaphysics. His philosophy rests on three
basic concepts: order, chance, and
probability. The Exposition of the Theory of Chances and Probabilities 1843
focuses on the calculus of probability, unfolds a theory of chance occurrences,
and distinguishes among objective, subjective, and philosophical probability.
The Essay on the Foundations of Knowledge 1861 defines science as logically
organized knowledge. Cournot developed a probabilist epistemology, showed the
relevance of probabilism to the scientific study of human acts, and further
assumed the existence of a providential and complex order undergirding the
universe. Materialism, Vitalism, Rationalism 1875 acknowledges transrationalism
and makes room for finality, purpose, and God.
Cousin, Victor 1792–1867,
French philosopher who set out to merge the French psychological tradition with
the pragmatism of Locke and Condillac and the inspiration of the Scottish Reid,
Stewart and German idealists Kant, Hegel. His early courses at the Sorbonne
1815–18, on “absolute” values that might overcome materialism and skepticism,
aroused immense enthusiasm. The course of 1818, Du Vrai, du Beau et du Bien Of
the True, the Beautiful, and the Good, is preserved in the Adolphe Garnier
edition of student notes 1836; other early texts appeared in the Fragments
philosophiques Philosophical Fragments, 1826. Dismissed from his teaching post
as a liberal 1820, arrested in Germany at the request of the French police and
detained in Berlin, he was released after Hegel intervened 1824; he was not
reinstated until 1828. Under Louis-Philippe, he rose to highest honors, became
minister of education, and introduced philosophy into the curriculum. His
eclecticism, transformed into a spiritualism and cult of the “juste milieu,”
became the official philosophy. Cousin rewrote his work accordingly and even
succeeded in having Du Vrai third edition, 1853 removed from the papal index.
In 1848 he was forced to retire. He is noted for his educational reforms, as a
historian of philosophy, and for his translations Proclus, Plato, editions
Descartes, and portraits of ladies of seventeenth-century society.
Couturat, Louis 1868–1914,
French philosopher and logician who wrote on the history of philosophy, logic,
philosophy of mathematics, and the possibility of a universal language.
Couturat refuted Renouvier’s finitism and advocated an actual infinite in The
Mathematical Infinite 1896. He argued that the assumption of infinite numbers
was indispensable to maintain the continuity of magnitudes. He saw a precursor
of modern logistic in Leibniz, basing his interpretation of Leibniz on the
Discourse on Metaphysics and Leibniz’s correspondence with Arnauld. His
epoch-making Leibniz’s Logic 1901 describes Leibniz’s metaphysics as panlogism.
Couturat published a study on Kant’s mathematical philosophy Revue de
Métaphysique, 1904, and defended Peano’s logic, Whitehead’s algebra, and
Russell’s logistic in The Algebra of Logic 1905. He also contributed to André
Lalande’s Vocabulaire technique et critique de la philosophie 1926. J.-L.S.
covering law model, the view of scientific explanation as a deductive argument
which contains non-vacuously at least one universal law among its premises. The
names of this view include ‘Hempel’s model’, ‘Hempel-Oppenheim HO model’,
‘Popper-Hempel model’, ‘deductive-nomological D-N model’, and the ‘subsumption
theory’ of explanation. The term ‘covering law model of explanation’ was
proposed by William Dray. The theory of scientific explanation was first
developed by Aristotle. He suggested that science proceeds from mere knowing
that to deeper knowing why by giving understanding of different things by the
four types of causes. Answers to why-questions are given by scientific
syllogisms, i.e., by deductive arguments with premises that are necessarily
true and causes of their consequences. Typical examples are the “subsumptive”
arguments that can be expressed by the Barbara syllogism: All ravens are black.
Jack is a raven. Therefore, Jack is black. Plants containing chlorophyll are
green. Grass contains chlorophyll. Therefore, grass is green. In modern logical
notation, such arguments have the form ∀x(Fx → Gx), Fa, therefore Ga. An
explanatory argument was later called in Greek synthesis, in
Latin compositio or demonstratio propter quid. After the seventeenth century,
the terms ‘explication’ and ‘explanation’ became commonly used. The
nineteenth-century empiricists accepted Hume’s criticism of Aristotelian
essences and necessities: a law of nature is an extensional statement that
expresses a uniformity, i.e., a constant conjunction between properties ‘All
swans are white’ or types of events ‘Lightning is always followed by thunder’.
Still, they accepted the subsumption theory of explanation: “An individual fact
is said to be explained by pointing out its cause, that is, by stating the law
or laws of causation, of which its production is an instance,” and “a law or
uniformity in nature is said to be explained when another law or laws are
pointed out, of which that law itself is but a case, and from which it could be
deduced” J. S. Mill. A general model of probabilistic explanation, with
deductive explanation as a specific case, was given by Peirce in 1883. A modern
formulation of the subsumption theory was given by Hempel and Paul Oppenheim in
1948 by the following schema of D-N explanation:

L1, . . ., Lr  (general laws)
C1, . . ., Ck  (antecedent conditions)
Therefore, E  (explanandum)

Explanandum E is here a sentence that describes a known particular event or
fact singular explanation or uniformity explanation of laws. Explanation is an
argument that answers an explanation-seeking why-question ‘Why E?’ by showing
that E is nomically expectable on the basis of the general laws L1, . . ., Lr
and the antecedent conditions C1, . . ., Ck. The
relation between the explanans and the explanandum is logical deduction.
Explanation is distinguished from other kinds of scientific systematization
(prediction, postdiction) that share its logical characteristics (a view often
called the symmetry thesis regarding explanation and prediction) by the
presupposition that the phenomenon E is already known. This also separates
explanations from reason-seeking arguments that answer questions of the form
‘What reasons are there for believing that E?’ Hempel and Oppenheim required
that the explanans have empirical content, i.e., be testable by experiment or
observation, and that it be true. If the strong condition of truth is dropped,
we speak of potential explanation. Dispositional explanations, for
non-probabilistic dispositions, can be formulated in the D-N model. For
example, let Hx = ‘x is hit by a hammer’, Bx = ‘x breaks’, and Dx = ‘x is fragile’.
Then the explanation why a piece of glass was broken may refer to its fragility
and its being hit: ∀x(Hx & Dx → Bx), Ha & Da, therefore Ba. It is easy to find
examples of HO explanations that are not
satisfactory: self-explanations ‘Grass is green, because grass is green’,
explanations with too weak premises ‘John died, because he had a heart attack
or his plane crashed’, and explanations with irrelevant information ‘This stuff
dissolves in water, because it is sugar produced in Finland’. Attempts at
finding necessary and sufficient conditions in syntactic and semantic terms for
acceptable explanations have not led to any agreement. The HO model also needs
the additional Aristotelian condition that causal explanation is directed from
causes to effects. This is shown by Sylvain Bromberger’s flagpole example: the
length of a flagpole explains the length of its shadow, but not vice versa.
Michael Scriven has argued against Hempel that explanations of particular
events should be given by singular causal statements ‘E because C’. However, a
regularity theory Humean or stronger than Humean of causality implies that the
truth of such a singular causal statement presupposes a universal law of the
form ‘Events of type C are universally followed by events of type E’. The HO
version of the covering law model can be generalized in several directions. The
explanans may contain probabilistic or statistical laws. The
explanans-explanandum relation may be inductive in this case the explanation
itself is inductive. This gives us four types of explanations: deductive-universal
i.e., D-N, deductive-probabilistic, inductive-universal, and
inductive-probabilistic I-P. Hempel’s 1962 model for I-P explanation contains a
probabilistic covering law P(G/F) = r, where r is the statistical probability of
G given F, and r in brackets is the inductive probability of the explanandum
given the explanans:

P(G/F) = r
Fa
Therefore, Ga  [r]

The explanation-seeking question may be weakened from ‘Why
necessarily E?’ to ‘How possibly E?’. In a corrective explanation, the
explanatory answer points out that the explanandum sentence E is not strictly
true. This is the case in approximate explanation e.g., Newton’s theory entails
a corrected form of Galileo’s and Kepler’s laws.
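The deductive core of the dispositional example above can be checked by brute force. A Python sketch, propositionalized for a single object a (the representation is illustrative):

from itertools import product

def entails(explanans, explanandum, names):
    # true iff every assignment verifying all explanans sentences also
    # verifies the explanandum
    for values in product([True, False], repeat=len(names)):
        v = dict(zip(names, values))
        if all(s(v) for s in explanans) and not explanandum(v):
            return False
    return True

law         = lambda v: (not (v['Ha'] and v['Da'])) or v['Ba']   # (Ha & Da) -> Ba
conditions  = [lambda v: v['Ha'], lambda v: v['Da']]
explanandum = lambda v: v['Ba']

print(entails([law] + conditions, explanandum, ['Ha', 'Da', 'Ba']))  # True
print(entails(conditions, explanandum, ['Ha', 'Da', 'Ba']))          # False: the law is used non-vacuously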
Craig’s interpolation
theorem, a theorem for first-order logic: if a sentence ψ of first-order logic
entails a sentence θ, there is an “interpolant,” a sentence φ in the vocabulary
common to θ and ψ that entails θ and is entailed by ψ. Originally, William
Craig proved his theorem in 1957 as a lemma, to give a simpler proof of Beth’s
definability theorem, but the result now stands on its own. In abstract model
theory, logics for which an interpolation theorem holds are said to have the
Craig interpolation property. Craig’s interpolation theorem shows that
first-order logic is closed under implicit definability, so that the concepts
embodied in first-order logic are all given explicitly. In the philosophy of
science literature ‘Craig’s theorem’ usually refers to another result of
Craig’s: that any recursively enumerable set of sentences of first-order logic can
be axiomatized. This has been used to argue that theoretical terms are in
principle eliminable from empirical theories. Assuming that an empirical theory
can be axiomatized in first-order logic, i.e., that there is a recursive set of
first-order sentences from which all theorems of the theory can be proven, it
follows that the set of consequences of the axioms in an “observational”
sublanguage is a recursively enumerable set. Thus, by Craig’s theorem, there is
a set of axioms for this subtheory, the Craig-reduct, that contains only
observation terms. Interestingly, the Craig-reduct theory may be semantically
weaker, in the sense that it may have models that cannot be extended to a model
of the full theory. The existence of such a model would prove that the
theoretical terms cannot all be defined on the basis of the observational
vocabulary only, a result related to Beth’s definability theorem.
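The propositional analogue of interpolation is easy to exhibit. A Python sketch (the theorem proper concerns first-order logic; this is illustrative only):

from itertools import product

def models(names):
    return [dict(zip(names, values))
            for values in product([True, False], repeat=len(names))]

def entails(a, b, names):
    return all(b(v) for v in models(names) if a(v))

NAMES = ['p', 'q', 'r']
psi   = lambda v: v['p'] and v['q']   # psi = p & q, vocabulary {p, q}
theta = lambda v: v['q'] or v['r']    # theta = q v r, vocabulary {q, r}
interpolant = lambda v: v['q']        # q lies in the shared vocabulary {q}
print(entails(psi, theta, NAMES))         # True: psi entails theta
print(entails(psi, interpolant, NAMES),   # True: psi entails q ...
      entails(interpolant, theta, NAMES)) # True: ... and q entails theta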
creation ex nihilo, the
act of bringing something into existence from nothing. According to traditional
Christian theology, God created the world ex nihilo. To say that the world was
created from nothing does not mean that there was a prior non-existent
substance out of which it was fashioned, but rather that there was not anything
out of which God brought it into being. However, some of the patristics
influenced by Plotinus, such as Gregory of Nyssa, apparently understood
creation ex nihilo to be an emanation from God according to which what is
created comes, not from nothing, but from God himself. Not everything that God
makes need be created ex nihilo; for if, as in Genesis 2:7, 19, God made a
human being and animals from the ground, a previously existing material, God
did not create them from nothing. Regardless of how bodies are made, orthodox
theology holds that human souls are created ex nihilo; the opposing view,
traducianism, holds that souls are propagated along with bodies.
creationism, acceptance
of the early chapters of Genesis taken literally. Genesis claims that the
universe and all of its living creatures including humans were created by God
in the space of six days. The need to find some way of reconciling this story
with the claims of science intensified in the nineteenth century, with the
publication of Darwin’s Origin of Species 1859. In the Southern states of the
United States, the indigenous form of evangelical Protestant Christianity
declared total opposition to evolutionism, refusing any attempt at
reconciliation, and affirming total commitment to a literal “creationist”
reading of the Bible. Because of this, certain states passed laws banning the
teaching of evolutionism. More recently, literalists have argued that the Bible
can be given full scientific backing, and they have therefore argued that
“Creation science” may properly be taught in state-supported schools in the
United States without violation of the constitutional separation of church and
state. This claim was challenged in the state of Arkansas in 1981, and
ultimately rejected by the U.S. Supreme Court. The creationism dispute has raised
some issues of philosophical interest and importance. Most obviously, there is
the question of what constitutes a genuine science. Is there an adequate criterion
of demarcation between science and nonscience, and will it put evolutionism on
the one side and creationism on the other? Some philosophers, arguing in the
spirit of Karl Popper, think that such a criterion can be found. Others are not
so sure; and yet others think that some such criterion can be found, but shows
creationism to be genuine science, albeit already proven false. Philosophers of
education have also taken an interest in creationism and what it represents. If
one grants that even the most orthodox science may contain a value component,
reflecting and influencing its practitioners’ culture, then teaching a subject
like biology almost certainly is not a normatively neutral enterprise. In that
case, without necessarily conceding to the creationist anything about the true
nature of science or values, perhaps one must agree that science with its
teaching is not something that can and should be set apart from the rest of
society, as an entirely distinct phenomenon.
Crescas, Hasdai d.1412,
Spanish Jewish philosopher, theologian, and statesman. He was a well-known
representative of the Jewish community in both Barcelona and Saragossa.
Following the death of his son in the anti-Jewish riots of 1391, he wrote a
chronicle of the massacres published as an appendix to Ibn Verga, Shevet
Yehudah, ed. M. Wiener, 1855. Crescas’s devotion to protecting Spanish Jewry in
a time when conversion was encouraged is documented in one extant work, the
Refutation of Christian Dogmas 1397–98, found in the 1451 Hebrew translation of
Joseph ibn Shem Tov Bittul ’Iqqarey ha-Nofrim. His major philosophical work, Or
Adonai The Light of the Lord, was intended as the first of a two-part project
that was to include his own more extensive systematization of halakha Jewish
law as well as a critique of Maimonides’ work. But this second part, “Lamp of
the Divine Commandment,” was never written. Or Adonai is a
philosophico-dogmatic response to and attack on the Aristotelian doctrines that
Crescas saw as a threat to the Jewish faith, doctrines concerning the nature of
God, space, time, place, free will, and infinity. For theological reasons he
attempts to refute basic tenets in Aristotelian physics. He offers, e.g., a
critique of Aristotle’s arguments against the existence of a vacuum. The
Aristotelian view of time is rejected as well. Time, like space, is thought by
Crescas to be infinite. Furthermore, it is not an accident of motion, but
rather exists only in the soul. In defending the fundamental doctrines of the
Torah, Crescas must address the question discussed by his predecessors
Maimonides and Gersonides, namely that of reconciling divine foreknowledge with
human freedom. Unlike these two thinkers, Crescas adopts a form of determinism,
arguing that God knows both the possible and what will necessarily take place.
An act is contingent with respect to itself, and necessary with respect to its
causes and God’s knowledge. To be willed freely, then, is not for an act to be
absolutely contingent, but rather for it to be “willed internally” as opposed
to “willed externally.” Reactions to Crescas’s doctrines were mixed. Isaac
Abrabanel, despite his respect for Crescas’s piety, rejected his views as
either “unintelligible” or “simple-minded.” On the other hand, Giovanni Pico
della Mirandola appeals to Crescas’s critique of Aristotelian physics; Judah
Abrabanel’s Dialogues of Love may be seen as accommodating Crescas’s
metaphysical views; and Spinoza’s notions of necessity, freedom, and extension
may well be influenced by the doctrines of Or Adonai.
criterion, broadly, a
sufficient condition for the presence of a certain property or for the truth of
a certain proposition. Generally, a criterion need be sufficient merely in
normal circumstances rather than absolutely sufficient. Typically, a criterion
is salient in some way, often by virtue of being a necessary condition as well
as a sufficient one. The plural form, ‘criteria’, is commonly used for a set of
singly necessary and jointly sufficient conditions. A set of truth conditions
is said to be criterial for the truth of propositions of a certain form. A
conceptual analysis of a philosophically important concept may take the form of
a proposed set of truth conditions for paradigmatic propositions containing the
concept in question. Philosophers have proposed criteria for such notions as
meaningfulness, intentionality, knowledge, justification, justice,
rightness, and identity including personal identity and event identity, among
many others. There is a special use of the term in connection with Wittgenstein’s
well-known remark that “an ‘inner process’ stands in need of outward criteria,”
e.g., moans and groans for aches and pains. The suggestion is that a
criteriological connection is needed to forge a conceptual link between items
of a sort that are intelligible and knowable to items of a sort that, but for
the connection, would not be intelligible or knowable. A mere symptom cannot
provide such a connection, for establishing a correlation between a symptom and
that for which it is a symptom presupposes that the latter is intelligible and
knowable. One objection to a criteriological view, whether about aches or
quarks, is that it clashes with realism about entities of the sort in question
and lapses into, as the case may be, behaviorism or instrumentalism. For it
seems that to posit a criteriological connection is to suppose that the nature
and existence of entities of a given sort can depend on the conditions for
their intelligibility or knowability, and that is to put the epistemological
cart before the ontological horse.
critical legal studies, a
loose assemblage of legal writings and thinkers in the United States and Great
Britain since the mid-1970s that aspire to a jurisprudence and a political
ideology. Like the American legal realists of the 1920s and 1930s, the
jurisprudential program is largely negative, consisting in the discovery of
supposed contradictions within both the law as a whole and areas of law such as
contracts and criminal law. The jurisprudential implication derived from such
supposed contradictions within the law is that any decision in any case can be
defended as following logically from some authoritative propositions of law,
making the law completely without guidance in particular cases. Also like the
American legal realists, the political ideology of critical legal studies is
vaguely leftist, embracing the communitarian critique of liberalism.
Communitarians fault liberalism for its alleged overemphasis on individual
rights and individual welfare at the expense of the intrinsic value of certain
collective goods. Given the cognitive relativism of many of its practitioners,
critical legal studies tends not to aspire to have anything that could be
called a theory of either law or of politics.
Critical Realism, a
philosophy that at the highest level of generality purports to integrate the
positive insights of both New Realism and idealism. New Realism was the first
wave of realistic reaction to the dominant idealism of the nineteenth century.
It was a version of immediate and direct realism. In its attempt to avoid any representationalism
that would lead to idealism, this tradition identified the immediate data of
consciousness with objects in the physical world. There is no intermediary
between the knower and the known. This heroic tour de force foundered on the
phenomena of error, illusion, and perceptual variation, and gave rise to a
successor realism Critical Realism that acknowledged the mediation of “the
mental” in our cognitive grasp of the physical world. ‘Critical Realism’ was
the title of a work in epistemology by Roy Wood Sellars 1916, but its more
general use to designate the broader movement derives from the 1920 cooperative
volume, Essays in Critical Realism: A Cooperative Study of the Problem of
Knowledge, containing position papers by Durant Drake, A. O. Lovejoy, J. B.
Pratt, A. K. Rogers, C. A. Strong, George Santayana, and Roy Wood Sellars. With
New Realism, Critical Realism maintains that the primary object of knowledge is
the independent physical world, and that what is immediately present to
consciousness is not the physical object as such, but some corresponding mental
state broadly construed. Whereas both New Realism and idealism grew out of the
conviction that any such mediated account of knowledge is untenable, the
Critical Realists felt that only if knowledge of the external world is
explained in terms of a process of mental mediation, can error, illusion, and
perceptual variation be accommodated. One could fashion an account of mental
mediation that did not involve the pitfalls of Lockean representationalism by
carefully distinguishing between the object known and the mental state through
which it is known. The Critical Realists differed among themselves both
epistemologically and metaphysically. The mediating elements in cognition were
variously construed as essences, ideas, or sense-data, and the precise role of
these items in cognition was again variously construed.
Metaphysically, some were dualists who saw knowledge as unexplainable in terms
of physical processes, whereas others principally Santayana and Sellars were
materialists who saw cognition as simply a function of conscious biological
systems. The position of most lasting influence was probably that of Sellars
because that torch was taken up by his son, Wilfrid, whose very sophisticated
development of it was quite influential.
critical theory, any
social theory that is at the same time explanatory, normative, practical, and
self-reflexive. The term was first developed by Horkheimer as a self-description
of the Frankfurt School and its revision of Marxism. It now has a wider
significance to include any critical, theoretical approach, including feminism
and liberation philosophy. When they make claims to be scientific, such
approaches attempt to give rigorous explanations of the causes of oppression,
such as ideological beliefs or economic dependence; these explanations must in
turn be verified by empirical evidence and employ the best available social and
economic theories. Such explanations are also normative and critical, since
they imply negative evaluations of current social practices. The explanations
are also practical, in that they provide a better self-understanding for agents
who may want to improve the social conditions that the theory negatively
evaluates. Such change generally aims at “emancipation,” and theoretical
insight empowers agents to remove limits to human freedom and the causes of
human suffering. Finally, these theories must also be self-reflexive: they must
account for their own conditions of possibility and for their potentially
transformative effects. These requirements contradict the standard account of
scientific theories and explanations, particularly positivism and its
separation of fact and value. For this reason, the methodological writings of
critical theorists often attack positivism and empiricism and attempt to
construct alternative epistemologies. Critical theorists also reject
relativism, since the cultural relativity of norms would undermine the basis of
critical evaluation of social practices and emancipatory change. The difference
between critical and non-critical theories can be illustrated by contrasting
the Marxian and Mannheimian theories of ideology. Whereas Mannheim’s theory
merely describes relations between ideas of social conditions, Marx’s theory
tries to show how certain social practices require false beliefs about them by
their participants. Marx’s theory not only explains why this is so, it also
negatively evaluates those practices; it is practical in that by disillusioning
participants, it makes them capable of transformative action. It is also
self-reflexive, since it shows why some practices require illusions and others
do not, and also why social crises and conflicts will lead agents to change
their circumstances. It is scientific, in that it appeals to historical
evidence and can be revised in light of better theories of social action,
language, and rationality. Marx also claimed that his theory was superior for
its special “dialectical method,” but this is now disputed by most critical
theorists, who incorporate many different theories and methods. This broader
definition of critical theory, however, leaves a gap between theory and
practice and places an extra burden on critics to justify their critical
theories without appeal to such notions as inevitable historical progress. This
problem has made critical theories more philosophical and concerned with
questions of justification.
Croce, Benedetto
1866–1952, Italian philosopher. He was born at Pescasseroli, in the Abruzzi, and
after 1886 lived in Naples. He briefly attended the University of Rome and was led to study Herbart’s
philosophy. In 1904 he founded the influential journal La critica. In 1910 he
was made life member of the Italian senate. Early in his career he befriended
Giovanni Gentile, but this friendship was breached by Gentile’s Fascism. During
the Fascist period and World War II Croce lived in isolation as the chief
anti-fascist thinker in Italy. He later became a leader of the Liberal party
and at the age of eighty founded the Institute for Historical Studies. Croce
was a literary and historical scholar who joined his great interest in these
fields to philosophy. His best-known work in the English-speaking world is
Aesthetic as Science of Expression and General Linguistic 1902. This was the
first part of his “Philosophy of Spirit”; the second was his Logic 1905, the
third his theory of the Practical 1909, and the fourth his Historiography 1917.
Croce was influenced by Hegel and the Hegelian aesthetician Francesco De
Sanctis 1817–83 and by Vico’s conceptions of knowledge, history, and society. He
wrote The Philosophy of Giambattista Vico 1911 and a famous commentary on
Hegel, What Is Living and What Is Dead in the Philosophy of Hegel 1907, in
which he advanced his conception of the “dialectic of distincts” as more
fundamental than the Hegelian dialectic of opposites. Croce held that
philosophy always springs from the occasion, a view perhaps rooted in his
concrete studies of history. He accepted the general Hegelian identification of
philosophy with the history of philosophy. His philosophy originates from his
conception of aesthetics. Central to his aesthetics is his view of intuition,
which evolved through various stages during his career. He regards aesthetic
experience as a primitive type of cognition. Intuition involves an awareness of
a particular image, which constitutes a non-conceptual form of knowledge. Art
is the expression of emotion but not simply for its own sake. The expression of
emotion can produce cognitive awareness in the sense that the particular
intuited as an image can have a cosmic aspect, so that in it the universal
human spirit is perceived. Such perception is present especially in the
masterpieces of world literature. Croce’s conception of aesthetic has
connections with Kant’s “intuition” Anschauung and to an extent with Vico’s
conception of a primordial form of thought based in imagination fantasia.
Croce’s philosophical idealism includes fully developed conceptions of logic,
science, law, history, politics, and ethics. His influence to date has been
largely in the field of aesthetics and in historicist conceptions of knowledge
and culture. His revival of Vico has inspired a whole school of Vico
scholarship. Croce’s conception of a “Philosophy of Spirit” showed it was
possible to develop a post-Hegelian philosophy that, with Hegel, takes “the
true to be the whole” but which does not simply imitate Hegel.
crucial experiment, a means
of deciding between rival theories that, providing parallel explanations of
large classes of phenomena, come to be placed at issue by a single fact. For
example, the Newtonian emission theory predicts that light travels faster in
water than in air; according to the wave theory, light travels slower in water
than in air. Dominique François Arago proposed a crucial experiment comparing
the respective velocities. Léon Foucault then devised an apparatus to measure
the speed of light in various media and found a lower velocity in water than in
air. Arago and Foucault concluded for the wave theory, believing that the
experiment refuted the emission theory. Other examples include Galileo’s
discovery of the phases of Venus Ptolemaic versus Copernican astronomy, Pascal’s
Puy-de-Dôme experiment with the barometer vacuists versus plenists, Fresnel’s
prediction of a spot of light in circular shadows particle versus wave optics,
and Eddington’s measurement of the gravitational bending of light rays during a
solar eclipse Newtonian versus Einsteinian gravitation. At issue in crucial
experiments is usually a novel prediction. The notion seems to derive from
Francis Bacon, whose New Organon 1620 discusses the “Instance of the Fingerpost
Instantia Crucis, later experimentum crucis,” a term borrowed from the post set up
at crossroads to indicate several directions. Crucial experiments were
emphasized in early nineteenth-century scientific methodology e.g., in John F. Herschel’s A Preliminary
Discourse on the Study of Natural Philosophy 1830. Duhem argued that crucial
experiments resemble false dilemmas: hypotheses in physics do not come in
pairs, so that crucial experiments cannot transform one of the two into a
demonstrated truth. Discussing Foucault’s experiment, Duhem asks whether we
dare assert that no other hypothesis is imaginable and suggests that instead of
light being either a simple particle or wave, light might be something else,
perhaps a disturbance propagated within a dielectric medium, as theorized by
Maxwell. In the twentieth century, crucial experiments and novel predictions
figured prominently in the work of Imre Lakatos 1922–74. Agreeing that crucial
experiments are unable to overthrow theories, Lakatos accepted them as
retroactive indications of the fertility or progress of research programs.
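Duhem’s point can be given a simple probabilistic gloss. In the Python sketch below the priors and likelihoods are stipulated purely for illustration, not historical estimates; E is the observation that light travels slower in water than in air:

# Illustrative-only Bayesian gloss on a 'crucial' experiment.
priors       = {'wave': 0.45, 'emission': 0.45, 'other': 0.10}
likelihood_E = {'wave': 0.99, 'emission': 0.01, 'other': 0.50}

evidence  = sum(priors[h] * likelihood_E[h] for h in priors)
posterior = {h: priors[h] * likelihood_E[h] / evidence for h in priors}
print(posterior)
# The emission theory is all but eliminated, but -- Duhem's point --
# the catch-all 'other' hypothesis is not, so the experiment falls
# short of demonstrating the wave theory.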
Crusius, Christian August
1715–75, German philosopher, theologian, and a devout Lutheran pastor who
believed that religion was endangered by the rationalist views especially of
Wolff. He devoted his considerable philosophical powers to working out acute
and often deep criticisms of Wolff and developing a comprehensive alternative
to the Wolffian system. His main philosophical works were published in the
1740s. In his understanding of epistemology and logic Crusius broke with many
of the assumptions that allowed Wolff to argue from how we think of things to
how things are. For instance, Crusius tried to show that the necessity in
causal connection is not the same as logical necessity. He rejected the
Leibnizian view that this world is probably the best possible world, and he
criticized the Wolffian view of freedom of
the will as merely a concealed spiritual mechanism. His ethics stressed our
dependence on God and his commands, as did the natural law theory of Pufendorf,
but he developed the view in some strikingly original ways. Rejecting
voluntarism, Crusius held that God’s commands take the form of innate
principles of the will not the understanding. Everyone alike can know what they
are, so contra Wolff there is no need for moral experts. And they carry their
own motivational force with them, so there is no need for external sanctions.
We have obligations of prudence to do what will forward our own ends; but true
obligation, the obligation of virtue, arises only when we act simply to comply
with God’s law, regardless of any ends of our own. In this distinction between
two kinds of obligation, as in many of his other views, Crusius plainly
anticipated much that Kant came to think. Kant when young read and admired his
work, and it is mainly for this reason that Crusius is now remembered.
Cudworth, Damaris, Lady
Masham 1659–1708, English philosopher and author of two treatises on religion,
A Discourse Concerning the Love of God 1690 and Occasional Thoughts in
Reference to a Virtuous Christian Life 1705. The first argues against the views
of the English Malebranchian, John Norris; the second, ostensibly about the
importance of education for women, argues for the need to establish natural
religion on rational principles and explores the place of revealed religion
within a rational framework. Cudworth’s reputation is founded on her long
friendship with John Locke. Her correspondence with him is almost entirely
personal; she also entered into a brief but philosophically interesting
exchange of letters with Leibniz.
Cumberland, Richard
1631–1718, English philosopher and bishop. He wrote a Latin Treatise of the Laws
of Nature 1672, translated twice into English and once into French. Admiring
Grotius, Cumberland hoped to refute Hobbes in the interests of defending
Christian morality and religion. He refused to appeal to innate ideas and a
priori arguments because he thought Hobbes must be attacked on his own ground.
Hence he offered a reductive and naturalistic account of natural law. The one
basic moral law of nature is that the pursuit of the good of all rational
beings is the best path to the agent’s own good. This is true because God made
nature so that actions aiding others are followed by beneficial consequences to
the agent, while those harmful to others harm the agent. Since the natural
consequences of actions provide sanctions that, once we know them, will make us
act for the good of others, we can conclude that there is a divine law by which
we are obligated to act for the common good. And all the other laws of nature
follow from the basic law. Cumberland refused to discuss free will, thereby
suggesting a view of human action as fully determined by natural causes. If on
his theory it is a blessing that God made nature including humans to work as it
does, the religious reader must wonder if there is any role left for God
concerning morality. Cumberland is generally viewed as a major forerunner of
utilitarianism.
curve-fitting problem,
the problem of making predictions from past observations by fitting curves to
the data. Curve fitting has two steps: first, select a family of curves; then,
find the best-fitting curve by some statistical criterion such as the method of
least squares e.g., choose the curve that has the least sum of squared
deviations between the curve and data. The method was first proposed by
Adrien-Marie Legendre 1752–1833 and Carl Friedrich Gauss 1777–1855 in the early
nineteenth century as a way of inferring planetary trajectories from noisy
data. More generally, curve fitting may be used to construct low-level
empirical generalizations. For example, suppose that the ideal gas law, P = nkT,
is chosen as the form of the law governing the dependence of the pressure
P on the equilibrium temperature T of a fixed volume of gas, where n is the
molecular number per unit volume and k is Boltzmann’s constant a universal
constant equal to 1.3804 × 10⁻¹⁶ erg/°C. When the parameter nk is adjustable,
the law specifies a family of curves one
for each numerical value of the parameter. Curve
fitting may be used to determine the best-fitting member of the family, thereby
effecting a measurement of the theoretical parameter, nk. The philosophically
vexing problem is how to justify the initial choice of the form of the law. On
the one hand, one might choose a very large, complex family of curves, which
would ensure excellent fit with any data set. The problem with this option is
that the best-fitting curve may overfit the data. If too much attention is paid
to the random elements of the data, then the predictively useful trends and
regularities will be missed. If it looks too good to be true, it probably is.
On the other hand, simpler families run a greater risk of making grossly false
assumptions about the true form of the law. Intuitively, the solution is to
choose a simple family of curves that maintains a reasonable degree of fit. The
simplicity of a family of curves is measured by the paucity of parameters. The
problem is to say how and why such a trade-off between simplicity and goodness
of fit should be made. When a theory can accommodate recalcitrant data only by
the ad hoc i.e., improperly motivated addition of new terms and parameters,
students of science have long felt that the subsequent increase in the degree
of fit should not count in the theory’s favor, and such additions are sometimes
called ad hoc hypotheses. The best-known example of this sort of ad hoc
hypothesizing is the addition of epicycles upon epicycles in the planetary
astronomies of Ptolemy and Copernicus. This is an example in which a gain in
fit need not compensate for the loss of simplicity. Contemporary philosophers
sometimes formulate the curve-fitting problem differently. They often assume
that there is no noise in the data, and speak of the problem of choosing among
different curves that fit the data exactly. Then the problem is to choose the
simplest curve from among all those curves that pass through every data point.
The problem is that there is no universally accepted way of defining the
simplicity of single curves. No matter how the problem is formulated, it is
widely agreed that simplicity should play some role in theory choice.
Rationalists have championed the curve-fitting problem as exemplifying the
underdetermination of theory from data and the need to make a priori
assumptions about the simplicity of nature. Those philosophers who think that
we have no such a priori knowledge still need to account for the relevance of
simplicity to science. Whewell described curve fitting as the colligation of
facts in the quantitative sciences, and the agreement in the measured
parameters coefficients obtained by different colligations of facts as the
consilience of inductions. Different colligations of facts say on the same gas
at different volume or for other gases may yield good agreement among
independently measured values of parameters like the molecular density of the
gas and Boltzmann’s constant. By identifying different parameters found to
agree, we constrain the form of the law without appealing to a priori knowledge
good news for empiricism. But the accompanying increase in unification also
worsens the overall degree of fit. Thus, there is also the problem of how and
why we should trade off unification with total degree of fit. Statisticians
often refer to a family of hypotheses as a model. A rapidly growing literature
in statistics on model selection has not yet produced any universally accepted
formula for trading off simplicity with degree of fit. However, there is wide
agreement among statisticians that the paucity of parameters is the appropriate
way of measuring simplicity.
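The trade-off can be exhibited numerically. The following Python sketch (requiring numpy; the AIC-style penalty is one proposal among many, and nothing in this entry singles it out) fits polynomial families of increasing complexity to noisy linear data:

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)   # the 'true' law is linear

n = x.size
for degree in (1, 3, 6):
    coefficients = np.polyfit(x, y, degree)      # least-squares fit
    sse = np.sum((np.polyval(coefficients, x) - y) ** 2)
    k = degree + 1                               # paucity of parameters
    aic = n * np.log(sse / n) + 2 * k            # fit penalized by complexity
    print(degree, round(sse, 4), round(aic, 1))
# Higher-degree families always fit at least as well (smaller SSE), but
# the penalized score will typically favor the simple family here.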
cut-elimination theorem,
a theorem stating that a certain type of inference rule including a rule that
corresponds to modus ponens is not needed in classical logic. The idea was
anticipated by J. Herbrand; the theorem was proved by G. Gentzen and
generalized by S. Kleene. Gentzen formulated a sequent calculus i.e., a deductive system with rules for
statements about derivability. It includes a rule that we here express as ‘From
C ⇒ D, M and M, C ⇒ D, infer C ⇒ D’ or ‘Given that C yields D or M, and that C
plus M yields D, we may infer that C yields D’. This is called the cut rule because it
cuts out the middle formula M. Gentzen showed that his sequent calculus is an
adequate formalization of the predicate logic, and that the cut rule can be
eliminated; anything provable with it can be proved without it. One important
consequence of this is that, if a formula F is provable, then there is a proof
of F that consists solely of subformulas of F. This fact simplifies the study
of provability. Gentzen’s methodology applies directly to classical logic but can
be adapted to many nonclassical logics, including some intuitionistic logics.
It has led to some important theorems about consistency, and has illuminated
the role of auxiliary assumptions in the derivation of consequences from a
theory.
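To fix notation, a sequent C ⇒ D can be modeled as a pair of formula sets, and the cut rule as an operation that discharges the cut formula M. A toy Python sketch (illustrative only, not a proof system):

def cut(seq1, seq2, m):
    # from C => D,M and M,C => D infer C => D, 'cutting' the formula m
    (c1, d1), (c2, d2) = seq1, seq2
    assert m in d1 and m in c2, "m must occur right in seq1 and left in seq2"
    assert c1 == c2 - {m} and d2 == d1 - {m}, "the side formulas must match"
    return (c1, d2)

# From  p => q, M   and   M, p => q   infer   p => q:
print(cut(({'p'}, {'q', 'M'}), ({'M', 'p'}, {'q'}), 'M'))   # ({'p'}, {'q'})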
cybernetics coined by
Norbert Wiener in 1947 from Greek kubernetes, ‘helmsman’, the study of the
communication and manipulation of information in service of the control and
guidance of biological, physical, or chemical energy systems. Historically,
cybernetics has been intertwined with mathematical theories of information
communication and computation. To describe the cybernetic properties of systems
or processes requires ways to describe and measure information reduce
uncertainty about events within the system and its environment. Feedback and
feedforward, the basic ingredients of cybernetic processes, involve
information as what is fed forward or
backward and are basic to processes such
as homeostasis in biological systems, automation in industry, and guidance systems.
Of course, their most comprehensive application is to the purposive behavior
thought of cognitively goal-directed systems such as ourselves. Feedback occurs
in closed-loop, as opposed to open-loop, systems. Actually, ‘open-loop’ is a
misnomer involving no loop, but it has become entrenched. The standard example
of an open-loop system is that of placing a heater with constant output in a
closed room and leaving it switched on. Room temperature may accidentally
reach, but may also dramatically exceed, the temperature desired by the
occupants. Such a heating system has no means of controlling itself to adapt to
required conditions. In contrast, the standard closed-loop system incorporates
a feedback component. At the heart of cybernetics is the concept of control. A
controlled process is one in which an end state that is reached depends
essentially on the behavior of the controlling system and not merely on its
external environment. That is, control involves partial independence for the
system. A control system may be pictured as having both an inner and outer
environment. The inner environment consists of the internal events that make up
the system; the outer environment consists of events that causally impinge on
the system, threatening disruption and loss of system integrity and stability.
For a system to maintain its independence and identity in the face of
fluctuations in its external environment, it must be able to detect information
about those changes in the external environment. Information must pass through
the interface between inner and outer environments, and the system must be able
to compensate for fluctuations of the outer environment by adjusting its own
inner environmental variables. Otherwise, disturbances in the outer environment
will overcome the system bringing its
inner states into equilibrium with the outer states, thereby losing its
identity as a distinct, independent system. This is nowhere more certain than
with the homeostatic systems of the body for temperature or blood sugar levels.
Control in the attainment of goals is accomplished by minimizing error.
Negative feedback, or information about error, is the difference between
activity a system actually performs output and that activity which is its goal
to perform input. The standard example of control incorporating negative
feedback is the thermostatically controlled heating system. The actual room
temperature system output carries information to the thermostat that can be
compared via goal-state comparator to the desired temperature for the room
input as embodied in the set-point on the thermostat; a correction can then be
made to minimize the difference error
the furnace turns on or off. Positive feedback tends to amplify the
value of the output of a system or of a system disturbance by adding the value
of the output to the system input quantity. Thus, the system accentuates
disturbances and, if unchecked, will eventually pass the brink of instability.
Suppose that as room temperature rises it causes the thermostatic set-point to rise
in direct proportion to the rise in temperature. This would cause the furnace
to continue to output heat possibly with disastrous consequences. Many
biological maladies have just this characteristic. For example, severe loss of
blood causes inability of the heart to pump effectively, which causes loss of
arterial pressure, which, in turn, causes reduced flow of blood to the heart,
reducing pumping efficiency. Cognitively goal-directed systems are
also cybernetic systems. Purposive attainment of a goal by a goal-directed
system must have at least: (1) an internal representation of the goal state of the system (a detector for whether the desired state is actual); (2) a feedback loop by which information about the present state of the system can be compared with the goal state as internally represented, and by means of which an error correction can be made to minimize any difference; and (3) a causal dependency of system output upon the error-correction process of condition (2), to distinguish goal success from fortuitous goal satisfaction.
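The error-minimizing loop just described is easy to exhibit in a short program. The sketch below (Python; the set-point, furnace output, and heat-loss figures are invented for illustration) compares the room temperature (output) with the set-point (input) on each cycle and switches the furnace accordingly, repeatedly driving the error toward zero; deleting the comparison and leaving the furnace on unconditionally would turn it into the open-loop heater of the earlier example.

    # Minimal sketch of a closed-loop (negative feedback) heating system.
    # All numerical values are illustrative assumptions.
    SET_POINT = 20.0     # desired room temperature (input)
    FURNACE_GAIN = 1.5   # degrees added per cycle while the furnace is on
    HEAT_LOSS = 0.5      # degrees lost per cycle to the outer environment

    temperature = 12.0   # initial room temperature (output)
    for cycle in range(20):
        error = SET_POINT - temperature       # information about error
        furnace_on = error > 0                # goal-state comparator
        if furnace_on:
            temperature += FURNACE_GAIN       # correction minimizing the error
        temperature -= HEAT_LOSS              # disturbance from outside
        print(f"cycle {cycle:2d}: temperature {temperature:5.2f}, "
              f"furnace {'on' if furnace_on else 'off'}")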
Cynics, a classical Grecian
philosophical school characterized by asceticism and emphasis on the
sufficiency of virtue for happiness (eudaimonia), boldness in speech, and
shamelessness in action. The Cynics were strongly influenced by Socrates and
were themselves an important influence on Stoic ethics. An ancient tradition
links the Cynics to Antisthenes (c.445–c.360 B.C.), an Athenian. He fought bravely
in the battle of Tanagra and claimed that he would not have been so courageous
if he had been born of two Athenians instead of an Athenian and a Thracian
slave. He studied with Gorgias, but later became a close companion of Socrates
and was present at Socrates’ death. Antisthenes was proudest of his wealth,
although he had no money, because he was satisfied with what he had and he
could live in whatever circumstances he found himself. Here he follows Socrates
in three respects. First, Socrates himself lived with a disregard for pleasure
and pain (e.g., walking barefoot in snow).
Second, Socrates thinks that in every circumstance a virtuous person is better
off than a nonvirtuous one; Antisthenes anticipates the Stoic development of
this to the view that virtue is sufficient for happiness, because the virtuous person
uses properly whatever is present. Third, both Socrates and Antisthenes stress
that the soul is more important than the body, and neglect the body for the
soul. Unlike the later Cynics, however, both Socrates and Antisthenes do accept
pleasure when it is available. Antisthenes also does not focus exclusively on
ethics; he wrote on other topics, including logic. He supposedly told Plato
that he could see a horse but not horseness, to which Plato replied that he had
not acquired the means to see horseness. Diogenes of Sinope (c.400–c.325 B.C.)
continued the emphasis on self-sufficiency and on the soul, but took the
disregard for pleasure to asceticism. According to one story, Plato called
Diogenes “Socrates gone mad.” He came to Athens after being exiled from Sinope,
perhaps because the coinage was defaced, either by himself or by others, under
his father’s direction. He took ‘deface the coinage!’ as a motto, meaning that
the current standards were corrupt and should be marked as corrupt by being
defaced; his refusal to live by them was his defacing them. For example, he
lived in a wine cask, ate whatever scraps he came across, and wrote approvingly
of cannibalism and incest. One story reports that he carried a lighted lamp in
broad daylight looking for an honest human, probably intending to suggest that
the people he did see were so corrupted that they were no longer really people.
He apparently wanted to replace the debased standards of custom with the
genuine standards of nature but nature
in the sense of what was minimally required for human life, which an individual
human could achieve, without society. Because of this, he was called a Cynic,
from the Grecian word kuon (‘dog’), because he was as shameless as a dog. Diogenes’ most famous successor was Crates (fl. c.328–325 B.C.). He was a Boeotian, from
Thebes, and renounced his wealth to become a Cynic. He seems to have been more
pleasant than Diogenes; according to some reports, every Athenian house was
open to him, and he was even regarded by them as a household god. Perhaps the
most famous incident involving Crates is his marriage to Hipparchia, who took
up the Cynic way of life despite her family’s opposition and insisted that
educating herself was preferable to working a loom. Like Diogenes, Crates
emphasized that happiness is self-sufficiency, and claimed that asceticism is
required for self-sufficiency; e.g., he advises us not to prefer oysters to
lentils. He argues that no one is happy if happiness is measured by the balance
of pleasure and pain, since in each period of our lives there is more pain than
pleasure. Cynicism continued to be active through the third century B.C., and
returned to prominence in the second century A.D. after an apparent
decline.
Cyrenaics, a classical Grecian
philosophical school that began shortly after Socrates and lasted for several
centuries, noted especially for hedonism. Ancient writers trace the Cyrenaics
back to Aristippus of Cyrene (fifth–fourth century B.C.), an associate of Socrates.
Aristippus came to Athens because of Socrates’ fame and later greatly enjoyed
the luxury of court life in Sicily. Some people ascribe the founding of the
school to his grandchild Aristippus, because of an ancient report that the
elder Aristippus said nothing clear about the human end. The Cyrenaics include
Aristippus’s child Arete, her child Aristippus (taught by Arete), Hegesias,
Anniceris, and Theodorus. The school seems to have been superseded by the
Epicureans. No Cyrenaic writings survive, and the reports we do have are
sketchy. The Cyrenaics avoid mathematics and natural philosophy, preferring
ethics because of its utility. According to them, not only will studying nature
not make us virtuous, it also won’t make us stronger or richer. Some reports
claim that they also avoid logic and epistemology. But this is not true of all
the Cyrenaics: according to other reports, they think logic and epistemology
are useful, consider arguments and also causes as topics to be covered in
ethics, and have an epistemology. Their epistemology is skeptical. We can know
only how we are affected; we can know, e.g., that we are being whitened, but not
that whatever is causing this sensation is itself white. This differs from
Protagoras’s theory; unlike Protagoras the Cyrenaics draw no inferences about
the things that affect us, claiming only that external things have a nature
that we cannot know. But, like Protagoras, the Cyrenaics base their theory on
the problem of conflicting appearances. Given their epistemology, if humans ought to aim at something that is not a way of being affected (i.e., according to them, something that is not immediately perceived), we can never know anything about it. Unsurprisingly, then, they claim that the end is a way of being
affected; in particular, they are hedonists. The end of good actions is particular pleasures (smooth changes), and the end of bad actions is particular pains (rough changes). There is also an intermediate class, which aims at neither
pleasure nor pain. Mere absence of pain is in this intermediate class, since
the absence of pain may be merely a static state. Pleasure for Aristippus seems
to be the sensation of pleasure, not including related psychic states. We
should aim at pleasure (although not everyone does), as is clear from our naturally seeking it as children, before we consciously choose to. Happiness,
which is the sum of the particular pleasures someone experiences, is
choiceworthy only for the particular pleasures that constitute it, while
particular pleasures are choiceworthy for themselves. Cyrenaics, then, are not
concerned with maximizing total pleasure over a lifetime, but only with
particular pleasures, and so they should not choose to give up particular
pleasures on the chance of increasing the total. Later Cyrenaics diverge in
important respects from the original Cyrenaic hedonism, perhaps in response to
the development of Epicurus’s views. Hegesias claims that happiness is
impossible because of the pains associated with the body, and so thinks of
happiness as total pleasure minus total pain. He emphasizes that wise people
act for themselves, and denies that people actually act for someone else.
Anniceris, on the other hand, claims that wise people are happy even if they
have few pleasures, and so seems to think of happiness as the sum of pleasures,
and not as the excess of pleasures over pains. Anniceris also begins
considering psychic pleasures: he insists that friends should be valued not
only for their utility, but also for our feelings toward them. We should even
accept losing pleasure because of a friend, even though pleasure is the end.
Theodorus goes a step beyond Anniceris. He claims that the end of good actions
is joy and that of bad actions is grief. Surprisingly, he denies that
friendship is reasonable, since fools have friends only for utility and wise
people need no friends. He even regards pleasure as intermediate between
practical wisdom and its opposite. This seems to involve regarding happiness as
the end, not particular pleasures, and may involve losing particular pleasures
for long-term happiness.
Czolbe, Heinrich (1819–73),
German philosopher. He was born in Danzig and trained in theology and medicine.
His main works are Neue Darstellung des Sensualismus (“New Exposition of Sensualism,” 1855), Entstehung des Selbstbewusstseins (“Origin of Self-Consciousness,” 1856), Die Grenzen und der Ursprung der menschlichen Erkenntnis (“The Limits and Origin of Human Knowledge,” 1865), and a posthumously published study, Grundzüge der extensionalen Erkenntnistheorie (1875). Czolbe
proposed a sensualistic theory of knowledge: knowledge is a copy of the actual,
and spatial extension is ascribed even to ideas. Space is the support of all
attributes. His later work defended a non-reductive materialism. Czolbe made
the rejection of the supersensuous a central principle and defended a radical
“sensationalism.” Despite this, he did not present a dogmatic
materialism, but cast his philosophy in hypothetical form. In his study of the
origin of self-consciousness Czolbe held that dissatisfaction with the actual
world generates supersensuous ideas and branded this attitude as “immoral.” He
excluded supernatural phenomena on the basis not of physiological or scientific
studies but of a “moral feeling of duty towards the natural world-order and
contentment with it.” The same valuation led him to postulate the eternality of
terrestrial life. Nietzsche was familiar with Czolbe’s works and incorporated
some of his themes into his philosophy.
d’Ailly, Pierre
(1350–1420), French Ockhamist philosopher, prelate, and writer. Educated at the
Collège de Navarre, he was promoted to doctor in the Sorbonne in 1380,
appointed chancellor of Paris in 1389,
consecrated bishop in 1395, and made a cardinal in 1411. He was influenced by
John of Mirecourt’s nominalism. He taught Gerson. At the Council of Constance (1414–18), which condemned Huss’s teachings, d’Ailly upheld the superiority of the council over the pope (conciliarism). The relation of astrology to history and
theology figures among his primary interests. His 1414 Tractatus de Concordia
astronomicae predicted the 1789 French Revolution. He composed a De anima, a
commentary on Boethius’s Consolation of Philosophy, and another on Peter
Lombard’s Sentences. His early logical work, Concepts and Insolubles (c.1472), was particularly influential. In epistemology, d’Ailly contradistinguished “natural light” (indubitable knowledge) from reason (relative knowledge), and emphasized thereafter the uncertainty of experimental knowledge and the mere probability of the classical “proofs” of God’s existence. His doctrine of God
differentiates God’s absolute power (potentia absoluta) from God’s ordained power on earth (potentia ordinata). His theology anticipated fideism (Deum esse sola fide tenetur), his ethics the spirit of Protestantism, and his sacramentology
Lutheranism. J.-L.S. d’Alembert, Jean Le Rond (1717–83), French mathematician,
philosopher, and Encyclopedist. According to Grimm, d’Alembert was the prime
luminary of the philosophic party. An abandoned, illegitimate child, he
nonetheless received an outstanding education at the Jansenist Collège des
Quatre-Nations in Paris. He read law for a while, tried medicine, and settled
on mathematics. In 1743, he published an acclaimed Treatise of Dynamics.
Subsequently, he joined the Paris Academy of Sciences and contributed decisive
works on mathematics and physics. In 1754, he was elected to the French
Academy, of which he later became permanent secretary. In association with Diderot,
he launched the Encyclopedia, for which he wrote the epoch-making Discours préliminaire (1751) and numerous entries on science. Unwilling to compromise with
the censorship, he resigned as coeditor in 1758. In the Discours préliminaire,
d’Alembert specified the divisions of the philosophical discourse on man:
pneumatology, logic, and ethics. Contrary to Christian philosophies, he limited
pneumatology to the investigation of the human soul. Prefiguring positivism,
his Essay on the Elements of Philosophy (1759) defines philosophy as a
comparative examination of physical phenomena. Influenced by Bacon, Locke, and
Newton, d’Alembert’s epistemology associates Cartesian psychology with the
sensory origin of ideas. Though assuming the universe to be rationally ordered,
he discarded metaphysical questions as inconclusive. The substance, or the
essence, of soul and matter, is unknowable. Agnosticism ineluctably arises from
his empirically based naturalism. D’Alembert is prominently featured in
D’Alembert’s Dream (1769), Diderot’s dialogical apology for materialism.
Damascius (c.462–c.550), Grecian
Neoplatonist philosopher, last head of the Athenian Academy before its closure
by Justinian in A.D. 529. Born probably in Damascus, he studied first in
Alexandria, and then moved to Athens shortly before Proclus’s death in 485. He
returned to Alexandria, where he attended the lectures of Ammonius, but came
back again to Athens in around 515, to assume the headship of the Academy.
After the closure, he retired briefly with some other philosophers, including
Simplicius, to Persia, but left after about a year, probably for Syria, where
he died. He composed many works, including a life of his master Isidorus, which
survives in truncated form; commentaries on Aristotle’s Categories, On the Heavens, and Meteorologics I (all lost); commentaries on Plato’s Alcibiades, Phaedo, Philebus, and Parmenides, which survive; and a surviving treatise On
First Principles. His philosophical system is a further elaboration of the
scholastic Neoplatonism of Proclus, exhibiting a great proliferation of
metaphysical entities.
Danto, Arthur Coleman
(b.1924), American philosopher of art and art history who has also contributed to
the philosophies of history, action, knowledge, science, and metaphilosophy. Among
his influential studies in the history of philosophy are books on Nietzsche,
Sartre, and Indian thought. Danto arrives at his philosophy of art through his
“method of indiscernibles,” which has greatly influenced contemporary
philosophical aesthetics. According to his metaphilosophy, genuine
philosophical questions arise when there is a theoretical need to differentiate
two things that are perceptually indiscernible, such as prudential actions versus moral actions (Kant), causal chains versus constant conjunctions (Hume), and perfect dreams versus reality (Descartes).
Applying the method to the philosophy of art, Danto asks what distinguishes an
artwork, such as Warhol’s Brillo Box, from its perceptually indiscernible,
real-world counterparts, such as Brillo boxes by Proctor and Gamble. His
answer (his partial definition of art) is that x is a work of art only if (1) x is about something and (2) x embodies its meaning (i.e., discovers a mode of presentation intended to be appropriate to whatever subject x is about). These
two necessary conditions, Danto claims, enable us to distinguish between
artworks and real things, between Warhol’s Brillo Box and Proctor and Gamble’s. However, critics have pointed out
that these conditions fail, since real Brillo boxes are about something (Brillo) about which they embody or convey meanings through their mode of presentation (viz., that Brillo is clean, fresh, and dynamic). Moreover, this is not an
isolated example. Danto’s theory of art confronts systematic difficulties in
differentiating real cultural artifacts, such as industrial packages, from
artworks proper. In addition to his philosophy of art, Danto proposes a
philosophy of art history. Like Hegel, Danto maintains that art history, as a developmental, progressive process, has ended. Danto believes that modern art has been primarily reflexive (i.e., about itself); it has attempted to use its own
forms and strategies to disclose the essential nature of art. Cubism and
abstract expressionism, for example, exhibit saliently the two-dimensional
nature of painting. With each experiment, modern art has gotten closer to
disclosing its own essence. But, Danto argues, with works such as Warhol’s
Brillo Box, artists have taken the philosophical project of self-definition as
far as they can, since once an artist like Warhol has shown that artworks can
be perceptually indiscernible from “real things” and, therefore, can look like
anything, there is nothing further that the artist qua artist can show through
the medium of appearances about the nature of art. The task of defining art
must be reassigned to philosophers to be treated discursively, and art
history as the developmental,
progressive narrative of self-definition
ends. Since that turn of events was putatively precipitated by Warhol in
the 1960s, Danto calls the present period of art making “post-historical.” As
an art critic for The Nation, he has been chronicling its vicissitudes for a
decade and a half. Some dissenters, nevertheless, have been unhappy with
Danto’s claim that art history has ended because, they maintain, he has failed
to demonstrate that the only prospects for a developmental, progressive history
of art reside in the project of the self-definition of art.
Darwinism, the view that
biological species evolve primarily by means of chance variation and natural
selection. Although several important scientists prior to Charles Darwin (1809–82)
had suggested that species evolve and had provided mechanisms for that
evolution, Darwin was the first to set out his mechanism in sufficient detail
and provide adequate empirical grounding. Even though Darwin preferred to talk
about descent with modification, the term that rapidly came to characterize his
theory was evolution. According to Darwin, organisms vary with respect to their
characteristics. In a litter of puppies, some will be bigger, some will have
longer hair, some will be more resistant to disease, etc. Darwin termed these variations ‘chance’, not because he thought that they were in any sense
“uncaused,” but to reject any general correlation between the variations that
an organism might need and those it gets, as Lamarck had proposed. Instead,
successive generations of organisms become adapted to their environments in a
more roundabout way. Variations occur in all directions. The organisms that
happen to possess the characteristics necessary to survive and reproduce
proliferate. Those that do not either die or leave fewer offspring. Before
Darwin, an adaptation was any trait that fits an organism to its environment.
After Darwin, the term came to be limited to just those useful traits that
arose through natural selection. For example, the sutures in the skulls of
mammals make parturition easier, but they are not adaptations in an
evolutionary sense because they arose in ancestors that did not
give birth to live young, as is indicated by these same sutures appearing in
the skulls of egg-laying birds. Because organisms are integrated systems,
Darwin thought that adaptations had to arise through the accumulation of
numerous, small variations. As a result, evolution is gradual. Darwin himself
was unsure about how progressive biological evolution is. Organisms certainly
become better adapted to their environments through successive generations, but
as fast as organisms adapt to their environments, their environments are likely
to change. Thus, Darwinian evolution may be goal-directed, but different
species pursue different goals, and these goals keep changing. Because heredity
was so important to his theory of evolution, Darwin supplemented it with a
theory of heredity (pangenesis). According
to this theory, the cells throughout the body of an organism produce numerous
tiny gemmules that find their way to the reproductive organs of the organism to
be transmitted in reproduction. An offspring receives variable numbers of
gemmules from each of its parents for each of its characteristics. For
instance, the male parent might contribute 214 gemmules for length of hair to
one offspring, 121 to another, etc., while the female parent might contribute
54 gemmules for length of hair to the first offspring and 89 to the second. As
a result, characters tend to blend. Darwin even thought that gemmules
themselves might merge, but he did not think that the merging of gemmules was
an important factor in the blending of characters. Numerous objections were
raised to Darwin’s theory in his day, and one of the most telling stemmed from
his adopting a blending theory of inheritance. As fast as natural selection
biases evolution in a particular direction, blending inheritance neutralizes
its effects. Darwin’s opponents argued that each species had its own range of
variation. Natural selection might bias the organisms belonging to a species in
a particular direction, but as a species approached its limits of variation,
additional change would become more difficult. Some special mechanism was
needed to leap over the deep, though possibly narrow, chasms that separate
species. Because a belief in biological evolution became widespread within a
decade or so after the publication of Darwin’s Origin of Species in 1859, the
tendency is to think that it was Darwin’s view of evolution that became
popular. Nothing could be further from the truth. Darwin’s contemporaries found
his theory too materialistic and haphazard because no supernatural or
teleological force influenced evolutionary development. Darwin’s contemporaries
were willing to accept evolution, but not the sort advocated by Darwin.
Although Darwin viewed the evolution of species on the model of individual
development, he did not think that it was directed by some internal force or
induced in a Lamarckian fashion by the environment. Most Darwinians adopted
just such a position. They also argued that species arise in the space of a
single generation so that the boundaries between species remained as discrete
as the creationists had maintained. Ideal morphologists even eliminated any
genuine temporal dimension to evolution. Instead they viewed the evolution of
species in the same atemporal way that mathematicians view the transformation
of an ellipse into a circle. The revolution that Darwin instigated was in most
respects non-Darwinian. By the turn of the century, Darwinism had gone into a
decided eclipse. Darwin himself remained fairly open with respect to the
mechanisms of evolution. For example, he was willing to accept a minor role for
Lamarckian forms of inheritance, and he acknowledged that on occasion a new
species might arise quite rapidly on the model of the Ancon sheep. Several of
his followers were less flexible, rejecting all forms of Lamarckian inheritance
and insisting that evolutionary change is always gradual. Eventually Darwinism
became identified with the views of these neo-Darwinians. Thus, when Mendelian
genetics burst on the scene at the turn of the century, opponents of Darwinism
interpreted this new particulate theory of inheritance as being incompatible
with Darwin’s blending theory. The difference between Darwin’s theory of
pangenesis and Mendelian genetics, however, did not concern the existence of
hereditary particles. Gemmules were as particulate as genes. The difference lay
in numbers. According to early Mendelians, each character is controlled by a
single pair of genes. Instead of receiving a variable number of gemmules from
each parent for each character, each offspring gets a single gene from each
parent, and these genes do not in any sense blend with each other. Blue eyes
remain as blue as ever from generation to generation, even when the gene for
blue eyes resides opposite the gene for brown eyes. As the nature of heredity
was gradually worked out, biologists began to realize that a Darwinian view of
evolution could be combined with Mendelian genetics. Initially, the founders of
this later stage in the development of neo-Darwinism exhibited considerable variation in their beliefs about the evolutionary process, but as they strove to
produce a single, synthetic theory, they tended to become more Darwinian than
Darwin had been. Although they acknowledged that other factors, such as the
effects of small numbers, might influence evolution, they emphasized that
natural selection is the sole directive force in evolution. It alone could
explain the complex adaptations exhibited by organisms. New species might arise
through the isolation of a few founder organisms, but from a populational
perspective, evolution was still gradual. New species do not arise in the space
of a single generation by means of “hopeful monsters” or any other developmental
means. Nor was evolution in any sense directional or progressive. Certain
lineages might become more complex for a while, but at this same time, others
would become simpler. Because biological evolution is so opportunistic, the
tree of life is highly irregular. But the united front presented by the
neo-Darwinians was in part an illusion. Differences of opinion persisted, for
instance over how heterogeneous species should be. No sooner did neo-Darwinism
become the dominant view among evolutionary biologists than voices of dissent
were raised. Currently, almost every aspect of the neo-Darwinian paradigm is
being challenged. No one proposes to reject naturalism, but those who view
themselves as opponents of neo-Darwinism urge more important roles for factors
treated as only minor by the neo-Darwinians. For example, neo-Darwinians view
selection as being extremely sharp-sighted. Any inferior organism, no matter
how slightly inferior, is sure to be eliminated. Nearly all variations are
deleterious. Currently evolutionists, even those who consider themselves
Darwinians, acknowledge that a high percentage of changes at the molecular
level may be neutral with respect to survival or reproduction. On current
estimates, over 95 percent of an organism’s genes may have no function at all.
Disagreement also exists about the level of organization at which selection can
operate. Some evolutionary biologists insist that selection occurs primarily at
the level of single genes, while others think that it can have effects at
higher levels of organization, certainly at the organismic level, possibly at
the level of entire species. Some biologists emphasize the effects of
developmental constraints on the evolutionary process, while others have
discovered unexpected mechanisms such as molecular drive. How much of this
conceptual variation will become incorporated into Darwinism remains to be
seen.
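The blending objection mentioned above is quantitative and can be illustrated with a small simulation (Python; the population size, trait values, and truncation-selection rule are invented for illustration): when each offspring takes the average of its parents’ trait values, the heritable variation on which selection works is roughly halved every generation, so the response to selection quickly stalls.

    # Illustrative sketch: blending inheritance erodes heritable variation.
    import random
    import statistics

    population = [random.gauss(10.0, 2.0) for _ in range(1000)]  # trait values
    for generation in range(8):
        survivors = sorted(population)[500:]   # only the upper half reproduces
        population = [                         # blending: offspring = parental mean
            (random.choice(survivors) + random.choice(survivors)) / 2
            for _ in range(1000)
        ]
        print(f"generation {generation}: mean {statistics.mean(population):6.3f}, "
              f"variance {statistics.variance(population):.4f}")

Under a particulate (Mendelian) scheme, by contrast, the genes themselves do not blend, so variation is conserved from one generation to the next.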
Davidson, Donald (b.1917),
American metaphysician and philosopher of mind and language. His views on the
relationship between our conceptions of ourselves as persons and as complex
physical objects have had an enormous impact on contemporary philosophy.
Davidson regards the mind–body problem as the problem of the relation between
mental and physical events; his discussions of explanation assume that the
entities explained are events; causation is a relation between events; and
action is a species of events, so that events are the very subject matter of
action theory. His central claim concerning events is that they are concrete
particulars (unrepeatable entities located in space and time). He does not take for granted that events exist, but
argues for their existence and for specific claims as to their nature. In “The Individuation of Events” (in Essays on Actions and Events, 1980), Davidson argues
that a satisfactory theory of action must recognize that we talk of the same
action under different descriptions. We must therefore assume the existence of
actions. His strongest argument for the existence of events derives from his
most original contribution to metaphysics, the semantic method of truth (Essays on Actions and Events, pp. 105–80; Essays on Truth and Interpretation, 1984, pp. 199–214). The argument is based on a distinctive trait of the English language (one not obviously shared by signal systems in lower animals), namely, its
productivity of combinations. We learn modes of composition as well as words
and are thus prepared to produce and respond to complex expressions never
before encountered. Davidson argues, from such considerations, that our very
understanding of English requires assuming the existence of events. To
understand Davidson’s rather complicated views about the relationships between
mind and body, consider the following claims: (1) The mental and the physical are distinct. (2) The mental and the physical causally interact. (3) The physical is causally closed. (1) says that no mental event is a physical event; (2), that some mental events cause physical events and vice versa; and (3), that all the causes of physical events are physical events. If
mental events are distinct from physical events and sometimes cause them, then
the physical is not causally closed. The dilemma posed by the plausibility of
each of these claims and by their apparent incompatibility just is the traditional mind–body problem. Davidson’s resolution consists of three theses: (4) There are no strict psychological or psychophysical laws; in fact, all strict laws are expressible in purely physical vocabulary. (5) Mental events causally interact with physical events. (6) Event c causes event e only if some strict
causal law subsumes c and e. It is commonly held that a property expressed by M is reducible to a property expressed by P (where M and P are not logically connected) only if some exceptionless law links them. So, given (4), mental and physical properties are distinct. (6) says that c causes e only if there are singular descriptions, D of c and D′ of e, and a “strict” causal law, L, such that L and ‘D occurred’ entail ‘D caused D′’. (6) and the second part of (4) entail that physical events have only physical causes and that all event causation is physically grounded. Given the parallel between (1)–(3) and (4)–(6), it may seem that
the latter, too, are incompatible. But Davidson shows that they all can be true
if and only if mental events are identical to physical events. Let us say that
an event e is a physical event if and only if e satisfies a basic physical predicate (that is, a physical predicate appearing in a “strict” law). Since only physical predicates (or predicates expressing properties reducible to basic physical properties) appear in “strict” laws, every event that enters into
causal relations satisfies a basic physical predicate. So, those mental events
which enter into causal relations are also physical events. Still, the
anomalous monist is committed only to a partial endorsement of (1). The mental and physical are distinct insofar as they are not linked by strict law, but they are not distinct insofar as mental events are in fact physical events.
decidability, as a
property of sets, the existence of an effective procedure (a “decision procedure”) which, when applied to any object, determines whether or not the
object belongs to the set. A theory or logic is decidable if and only if the
set of its theorems is. Decidability is proved by describing a decision
procedure and showing that it works. The truth table method, for example,
establishes that classical propositional logic is decidable. To prove that
something is not decidable requires a more precise characterization of the
notion of effective procedure. Using one such characterization for which there
is ample evidence, Church proved that classical predicate logic is not
decidable.
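The truth table method mentioned above is itself an explicit decision procedure, and a version of it is easy to program. In the sketch below (Python; representing formulas as Boolean functions is an illustrative choice), a formula of classical propositional logic in n variables is a theorem just in case it comes out true on every one of the 2^n assignments, each of which is checked mechanically.

    # Truth-table decision procedure for classical propositional logic.
    from itertools import product

    def is_tautology(formula, num_vars):
        """formula: a function of num_vars Booleans returning a Boolean.
        True iff the formula holds under every truth-value assignment."""
        return all(formula(*assignment)
                   for assignment in product([False, True], repeat=num_vars))

    implies = lambda a, b: (not a) or b

    # Peirce's law ((p -> q) -> p) -> p is a classical theorem:
    print(is_tautology(lambda p, q: implies(implies(implies(p, q), p), p), 2))  # True
    # p -> q is not:
    print(is_tautology(implies, 2))  # False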
decision theory, the
theory of rational decision, often called “rational choice theory” in political
science and other social sciences. The basic idea (probably Pascal’s) was published at the end of Arnauld’s Port-Royal Logic (1662): “To judge what one must
do to obtain a good or avoid an evil one must consider not only the good and
the evil in itself but also the probability of its happening or not happening,
and view geometrically the proportion that all these things have together.”
Where goods and evils are monetary, Daniel Bernoulli (1738) spelled the idea out
in terms of expected utilities as figures of merit for actions, holding that
“in the absence of the unusual, the utility resulting from a fixed small
increase in wealth will be inversely proportional to the quantity of goods
previously possessed.” This was meant to solve the St. Petersburg paradox:
Peter tosses a coin . . . until it should land “heads” [on toss n]. . . . He
agrees to give Paul one ducat if he gets “heads” on the very first throw [and]
with each additional throw the number of ducats he must pay is doubled. . . .
Although the standard calculation shows that the value of Paul’s expectation [of gain] is infinitely great [i.e., the sum of all possible gains × probabilities, Σn (2^n/2) × (½)^n], it has . . . to be admitted that any fairly
reasonable man would sell his chance, with great pleasure, for twenty ducats.
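The two expectations contrasted here can be checked numerically. The sketch below (Python; the truncation points are an illustrative device, since both sums are infinite series, and taking the utility of a gain g as log(1 + g) is merely one reading of Bernoulli’s inverse-proportionality assumption) shows the expectation of monetary gain growing without bound while the expectation of utility converges.

    # St. Petersburg game: heads first appears on toss n (probability (1/2)**n),
    # paying 2**(n - 1) ducats. Truncation and the log utility are illustrative.
    import math

    def expectations(num_tosses):
        money = sum(2 ** (n - 1) * 0.5 ** n for n in range(1, num_tosses + 1))
        utility = sum(math.log(1 + 2 ** (n - 1)) * 0.5 ** n
                      for n in range(1, num_tosses + 1))
        return money, utility

    for n in (10, 20, 40):
        money, utility = expectations(n)
        print(f"first {n:2d} tosses: expected gain {money:5.1f} ducats, "
              f"expected utility {utility:.4f}")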
In this case Paul’s expectation of utility is indeed finite on Bernoulli’s
assumption of inverse proportionality; but as Karl Menger observed (1934),
Bernoulli’s solution fails if payoffs are so large that utilities are inversely
proportional to probabilities; then only boundedness of utility scales resolves
the paradox. Bernoulli’s idea of diminishing marginal utility of wealth
survived in the neoclassical texts of W. S. Jevons (1871), Alfred Marshall (1890), and A. C. Pigou (1920), where personal utility judgment was understood to cause
preference. But in the 1930s, operationalistic arguments of John Hicks and R.
G. D. Allen persuaded economists that, on the contrary, (1) utility is no cause but a description, in which (2) the numbers indicate preference order but not intensity. In their Theory of Games and Economic Behavior (1946), John von Neumann and Oskar Morgenstern undid (2) by pushing (1) further: ordinal preferences
among risky prospects were now seen to be describable on “interval” scales of
subjective utility like the Fahrenheit and Celsius scales for temperature, so
that once utilities, e.g., 0 and 1, are assigned to any prospect and any
preferred one, utilities of all prospects are determined by overall preferences
among gambles, i.e., probability distributions over prospects. Thus, the
utility midpoint between two prospects is marked by the distribution assigning
probability ½ to each. In fact, Ramsey had done that and more in a little-noticed essay (“Truth and Probability,” 1931), teasing subjective probabilities as well as utilities out of ordinal preferences among gambles. In a form independently invented by L. J. Savage (Foundations of Statistics, 1954),
this approach is now widely accepted as a basis for rational decision analysis.
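The core calculation of such an analysis is simple to exhibit. In the sketch below (Python; the states, acts, probabilities, and utilities are all invented for illustration), acts assign consequences to states, and the recommended act is the one with the greatest probability-weighted average utility.

    # Expected-utility comparison of acts; every number here is illustrative.
    probabilities = {"rain": 0.3, "shine": 0.7}               # degrees of belief
    utilities = {"wet": 0.0, "dry": 0.8, "dry, no burden": 1.0}

    acts = {  # deterministic assignments of consequences to states
        "take umbrella": {"rain": "dry", "shine": "dry"},
        "leave umbrella": {"rain": "wet", "shine": "dry, no burden"},
    }

    def expected_utility(act):
        return sum(probabilities[state] * utilities[act[state]]
                   for state in probabilities)

    for name, act in acts.items():
        print(f"{name}: {expected_utility(act):.2f}")
    # take umbrella: 0.80; leave umbrella: 0.70 -> taking it maximizes EU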
The 1968 book of that title by Howard Raiffa became a theoretical centerpiece
of M.B.A. curricula, whose graduates diffused it through industry, government,
and the military in a simplified format for defensible decision making, namely,
“cost-benefit analyses,” substituting expected numbers of dollars, deaths, etc.,
for preference-based expected utilities. Social choice and group decision form
the native ground of interpersonal comparison of personal utilities. Thus, John
C. Harsanyi (1955) proved that if (1) individual and social preferences all satisfy the von Neumann-Morgenstern axioms, and (2) society is indifferent between two prospects whenever all individuals are, and (3) society prefers one prospect to
another whenever someone does and nobody has the opposite preference, then
social utilities are expressible as sums of individual utilities on interval
scales obtained by stretching or compressing the individual scales by amounts
determined by the social preferences. Arguably, the theorem shows how to derive
interpersonal comparisons of individual preference intensities from social
preference orderings that are thought to treat individual preferences on a par.
Somewhat earlier, Kenneth Arrow had written that “interpersonal comparison of
utilities has no meaning and, in fact, there is no meaning relevant to welfare
economics in the measurability of individual utility” (Social Choice and Individual Values, 1951), a position later abandoned (P. Laslett and W. G. Runciman, eds., Philosophy, Politics and Society, 1967). Arrow’s “impossibility theorem” is illustrated by cyclic
preferences observed by Condorcet in 1785 among candidates A, B, C of voters 1,
2, 3, who rank them ABC, BCA, CAB, respectively, in decreasing order of
preference, so that majority rule yields intransitive preferences for the group
of three, of whom two (1, 3) prefer A to B and two (1, 2) prefer B to C but two (2, 3) prefer C to A. In general, the theorem denies the existence of technically
democratic schemes for forming social preferences from citizens’ preferences. A
clause tendentiously called “independence of irrelevant alternatives” in the
definition of ‘democratic’ rules out appeal to preferences among non-candidates
as a way to form social preferences among candidates, thus ruling out the
preferences among gambles used in Harsanyi’s theorem. See John Broome, Weighing
Goods, 1991, for further information and references. Savage derived the agent’s
probabilities for states as well as utilities for consequences from preferences
among abstract acts, represented by deterministic assignments of consequences
to states. An act’s place in the preference ordering is then reflected by its
expected utility, a probability-weighted average of the utilities of its
consequences in the various states. Savage’s states and consequences formed
distinct sets, with every assignment of consequences to states constituting an
act. While Ramsey had also taken acts to be functions from states to
consequences, he took consequences to be propositions sets of states, and assigned
utilities to states, not consequences. A further step in that direction
represents acts, too, by propositions (see Ethan Bolker, Functions Resembling Quotients of Measures, Microfilms, 1965; and Richard Jeffrey, The Logic of Decision, 1965, 1990). Bolker’s representation
theorem states conditions under which preferences between truth of propositions
determine probabilities and utilities nearly enough to make the position of a
proposition in one’s preference ranking reflect its “desirability,” i.e., one’s
expectation of utility conditionally on it. Alongside such basic properties
as transitivity and connexity, a workhorse among Savage’s assumptions was the
“sure-thing principle”: Preferences among acts having the same consequences in
certain states are unaffected by arbitrary changes in those consequences. This
implies that agents see states as probabilistically independent of acts, and
therefore implies that an act cannot be preferred to one that dominates it in
the sense that the dominant act’s consequences in each state have utilities at
least as great as the other’s. Unlike the sure-thing principle, the principle ‘Choose so as to maximize CEU (conditional expectation of utility)’ rationalizes action aiming to enhance probabilities of preferred states of nature, as in
quitting cigarettes to increase life expectancy. But as Nozick pointed out in
1969, there are problems in which choiceworthiness goes by dominance rather
than CEU, as when the smoker (like R. A. Fisher in 1959) believes that the
statistical association between smoking and lung cancer is due to a genetic
allele, possessors of which are more likely than others to smoke and to
contract lung cancer, although among them smokers are not especially likely to
contract lung cancer. In such “Newcomb” problems choices are ineffectual signs
of conditions that agents would promote or prevent if they could. Causal
decision theories modify the CEU formula to obtain figures of merit distinguishing causal efficacy from evidentiary significance (e.g., replacing conditional probabilities by probabilities of counterfactual conditionals; or forming a weighted average of CEU’s under all hypotheses about causes, with agents’ unconditional probabilities of hypotheses as weights; etc.). Mathematical statisticians leery
of subjective probability have cultivated Abraham Wald’s Theory of Statistical
Decision Functions (1950), treating statistical estimation, experimental design,
and hypothesis testing as zero-sum “games against nature.” For an account of
the opposite assimilation, of game theory to probabilistic decision theory, see
Skyrms, Dynamics of Rational Deliberation (1990). The “preference logics” of Sören Halldén, The Logic of ‘Better’ (1957), and G. H. von Wright, The Logic of Preference (1963), sidestep probability. Thus, Halldén holds that when truth of p is
preferred to truth of q, falsity of q must be preferred to falsity of p, and
von Wright (with Aristotle) holds that “this is more choiceworthy than that if this is choiceworthy without that, but that is not choiceworthy without this” (Topics III, 118a). Both principles fail in the absence of special probabilistic
assumptions, e.g., equiprobability of p with q. Received wisdom counts decision
theory clearly false as a description of human behavior, seeing its proper
status as normative. But some, notably Davidson, see the theory as constitutive
of the very concept of preference, so that, e.g., preferences can no more be
intransitive than propositions can be at once true and false.
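Condorcet’s cycle, used above to illustrate Arrow’s theorem, can be verified mechanically. The sketch below (Python; the voter numbering follows the entry) tallies the pairwise majorities for the rankings ABC, BCA, and CAB and exhibits the intransitive group preference.

    # Condorcet's voting paradox: pairwise majorities can cycle.
    from itertools import combinations

    rankings = {1: ["A", "B", "C"], 2: ["B", "C", "A"], 3: ["C", "A", "B"]}

    def prefers(voter, x, y):
        order = rankings[voter]
        return order.index(x) < order.index(y)

    for x, y in combinations("ABC", 2):
        votes_for_x = sum(prefers(v, x, y) for v in rankings)
        winner, loser = (x, y) if votes_for_x >= 2 else (y, x)
        print(f"majority prefers {winner} to {loser}")
    # Prints: A over B, C over A, B over C -- an intransitive cycle.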
deconstruction, a
demonstration of the incompleteness or incoherence of a philosophical position
using concepts and principles of argument whose meaning and use is legitimated
only by that philosophical position. A deconstruction is thus a kind of internal
conceptual critique in which the critic implicitly and provisionally adheres to
the position criticized. The early work of Derrida is the source of the term
and provides paradigm cases of its referent. That deconstruction remains within
the position being discussed follows from a fundamental deconstructive argument
about the nature of language and thought. Derrida’s earliest deconstructions
argue against the possibility of an interior “language” of thought and
intention such that the senses and referents of terms are determined by their
very nature. Such terms are “meanings” or logoi. Derrida calls accounts that
presuppose such magical thought-terms “logocentric.” He claims, following
Heidegger, that the conception of such logoi is basic to the concepts of Western
metaphysics, and that Western metaphysics is fundamental to our cultural
practices and languages. Thus there is no “ordinary language” uncontaminated by
philosophy. Logoi ground all our accounts of intention, meaning, truth, and
logical connection. Versions of logoi in the history of philosophy range from
Plato’s Forms through the self-interpreting ideas of the empiricists to
Husserl’s intentional entities. Thus Derrida’s fullest deconstructions are of
texts that give explicit accounts of logoi, especially his discussion of
Husserl in Speech and Phenomena. There, Derrida argues that meanings that are
fully present to consciousness are in principle impossible. The idea of a meaning
is the idea of a repeatable ideality. But “repeatability” is not a feature that
can be present. So meanings, as such, cannot be fully before the mind.
Self-interpreting logoi are an incoherent supposition. Without logoi, thought
and intention are merely wordlike and have no intrinsic connection to a sense
or a referent. Thus “meaning” rests on connections of all kinds among pieces of
language and among our linguistic interactions with the world. Without logoi,
no special class of connections is specifically “logical.” Roughly speaking, Derrida
agrees with Quine both on the nature of meaning and on the related view that
“our theory” cannot be abandoned all at once. Thus a philosopher must by and
large think about a logocentric philosophical theory that has shaped our
language in the very logocentric terms that that theory has shaped. Thus
deconstruction is not an excision of criticized doctrines, but a much more
complicated, self-referential relationship. Deconstructive arguments work out
the consequences of there being nothing helpfully better than words, i.e., of
thoroughgoing nominalism. According to Derrida, without logoi fundamental
philosophical contrasts lose their principled foundations, since such contrasts
implicitly posit one term as a logos relative to which the other side is defective.
Without logos, many contrasts cannot be made to function as principles of the
sort of theory philosophy has sought. Thus the contrasts between metaphorical
and literal, rhetoric and logic, and other central notions of philosophy are
shown not to have the foundation that their use presupposes.
Dedekind, Richard
(1831–1916), German mathematician, one of the most important figures in the
mathematical analysis of foundational questions that took place in the late
nineteenth century. Philosophically, three things are interesting about
Dedekind’s work: (1) the insistence that the fundamental numerical systems of mathematics must be developed independently of spatiotemporal or geometrical notions; (2) the insistence that the number systems rely on certain mental capacities fundamental to thought, in particular on the capacity of the mind to “create”; and (3) the recognition that this “creation” is “creation” according to certain key properties, properties that careful mathematical analysis reveals as essential to the subject matter. (1) is a concern Dedekind shared with Bolzano, Cantor, Frege, and Hilbert; (2) sets Dedekind apart from Frege; and (3) represents
a distinctive shift toward the later axiomatic position of Hilbert and somewhat
away from the concern with the individual nature of the central abstract
mathematical objects which is a central concern of Frege. Much of Dedekind’s
position is sketched in the Habilitationsrede of 1854, the procedure there
being applied in outline to the extension of the positive whole numbers to the
integers, and then to the rational field. However, the two works best known to
philosophers are the monographs on irrational numbers (Stetigkeit und irrationale Zahlen, 1872) and on natural numbers (Was sind und was sollen die Zahlen?, 1888), both of which pursue the procedure advocated in 1854. In both we
find an “analysis” designed to uncover the essential properties involved,
followed by a “synthesis” designed to show that there can be such systems, this
then followed by a “creation” of objects possessing the properties and nothing
more. In the 1872 work, Dedekind suggests that the essence of continuity in the reals is that whenever the line is divided into two halves by a cut, i.e., into two subsets A1 and A2 such that if p ∈ A1 and q ∈ A2, then p < q (and, if p ∈ A1 and q < p, then q ∈ A1, and if p ∈ A2 and q > p, then q ∈ A2 as well), then there is a real number r which “produces” this cut, i.e., such that A1 = {p : p < r} and A2 = {p : r ≤ p}. The task is then
to characterize the real numbers so that this is indeed true of them. Dedekind
shows that, whereas the rationals themselves do not have this property, the
collection of all cuts in the rationals does. Dedekind then “defines” the
irrationals through this observation, not directly as the cuts in the rationals
themselves, as was done later, but rather through the “creation” of “new
irrational numbers” to correspond to those rational cuts not hitherto
“produced” by a number. The 1888 work starts from the notion of a “mapping” of
one object onto another, which for Dedekind is necessary for all exact thought.
Dedekind then develops the notion of a one-to-one (into) mapping, which is then used to characterize infinity (“Dedekind infinity”). Using the fundamental notion
of a chain, Dedekind characterizes the notion of a “simply infinite system,”
thus one that is isomorphic to the natural number sequence. Thus, he succeeds
in the goal set out in the 1854 lecture: isolating precisely the characteristic
properties of the natural number system. But do simply infinite systems, in
particular the natural number system, exist? Dedekind now argues: Any infinite system must contain a simply infinite system (Theorem 72). Correspondingly, Dedekind sets out to prove that there are infinite systems (Theorem 66), for which he uses an infamous argument reminiscent of Bolzano’s
from thirty years earlier involving “my thought-world,” etc. It is generally
agreed that the argument does not work, although it is important to remember
Dedekind’s wish to demonstrate that since the numbers are to be free creations
of the human mind, his proofs should rely only on the properties of the mental.
The specific act of “creation,” however, comes in when Dedekind, starting from
any simply infinite system, abstracts from the “particular properties” of this,
claiming that what results is the simply infinite system of the natural
numbers.
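Dedekind’s cut construction can be made concrete in a few lines (Python; the choice of √2 and the use of exact rational arithmetic are illustrative). A cut is specified by a membership test for its lower set A1; the cut below is “produced” by no rational number, which is exactly the gap that the newly created irrational is meant to fill.

    # A Dedekind cut in the rationals: the cut "produced" by the square root of 2.
    from fractions import Fraction

    def in_A1(p: Fraction) -> bool:
        """True iff rational p belongs to A1, the lower set of the cut."""
        return p <= 0 or p * p < 2

    # Every rational falls on exactly one side of the cut:
    for p in [Fraction(1), Fraction(7, 5), Fraction(141421, 100000), Fraction(3, 2)]:
        print(f"{p} lies in {'A1' if in_A1(p) else 'A2'}")

    # No rational r produces this cut (r * r == 2 has no rational solution),
    # so Dedekind "creates" a new irrational number to correspond to it.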
de dicto, of what is said
or of the proposition, as opposed to de re, of the thing. Many philosophers
believe the following ambiguous, depending on whether they are interpreted de dicto or de re: (1) It is possible that the number of U.S. states is even. (2) Galileo believes that the earth moves. Assume for illustrative purposes that there are propositions and properties. If (1) is interpreted as de dicto, it asserts that the proposition that the number of U.S. states is even is a possible truth (something true, since there are in fact fifty states). If (1) is interpreted as de re, it asserts that the actual number of states (fifty) has the property of being possibly even (something essentialism takes to be true). Similarly for (2); it may mean that Galileo’s belief has a certain content (that the earth moves) or that Galileo believes, of the earth, that it moves. More recently, largely due to Castañeda and John Perry, many
philosophers have come to believe in de se (“of oneself”) ascriptions, distinct
from de dicto and de re. Suppose, while drinking with others, I notice that
someone is spilling beer. Later I come to realize that it is I. I believed at
the outset that someone was spilling beer, but didn’t believe that I was. Once
I did, I straightened my glass. The distinction between de se and de dicto
attributions is supposed to be supported by the fact that while de dicto
propositions must be either true or false, there is no true proposition
embeddable within ‘I believe that . . .’ that correctly ascribes to me the
belief that I myself am spilling beer. The sentence ‘I am spilling beer’ will
not do, because it employs an “essential” indexical, ‘I’. Were I, e.g., to
designate myself other than by using ‘I’ in attributing the relevant belief to
myself, there would be no explanation of my straightening my glass. Even if I
believed de re that LePore is spilling beer, this still does not account for
why I lift my glass. For I might not know I am LePore. On the basis of such
data, some philosophers infer that de se attributions are irreducible to de re
or de dicto attributions.
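The de dicto/de re ambiguity can be displayed as a difference in the scope of the modal or belief operator. One standard formalization (merely illustrative; the notation is not from the entry), in LaTeX:

    % (1) de dicto: the possibility operator governs the whole proposition.
    \Diamond\, \exists n\, \bigl(n = \text{the number of U.S. states} \land \mathrm{Even}(n)\bigr)
    % (1) de re: the operator governs only the predicate, of the thing itself.
    \exists n\, \bigl(n = \text{the number of U.S. states} \land \Diamond\,\mathrm{Even}(n)\bigr)
    % (2) de dicto versus de re belief ascriptions, analogously:
    \mathrm{Bel}\bigl(\text{Galileo},\ \text{that the earth moves}\bigr)
    \exists x\, \bigl(x = \text{the earth} \land \mathrm{Bel}(\text{Galileo},\ \text{that } x \text{ moves})\bigr)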
deduction, a finite
sequence of sentences whose last sentence is a conclusion of the sequence (the one said to be deduced) and which is such that each sentence in the sequence is
an axiom or a premise or follows from preceding sentences in the sequence by a
rule of inference. A synonym is ‘derivation’. Deduction is a system-relative
concept. It makes sense to say something is a deduction only relative to a
particular system of axioms and rules of inference. The very same sequence of
sentences might be a deduction relative to one such system but not relative to
another. The concept of deduction is a generalization of the concept of proof.
A proof is a finite sequence of sentences each of which is an axiom or follows
from preceding sentences in the sequence by a rule of inference. The last
sentence in the sequence is a theorem. Given that the system of axioms and rules of inference is effectively specifiable, there is an effective procedure
for determining, whenever a finite sequence of sentences is given, whether it
is a proof relative to that system. The notion of theorem is not in general effective (decidable). For there may be no method by which we can always find a
proof of a given sentence or determine that none exists. The concepts of
deduction and consequence are distinct. The first is syntactical; the second is semantical. It was a discovery that, relative to the axioms and rules of
inference of classical logic, a sentence S is deducible from a set of sentences
K provided that S is a consequence of K. Compactness is an important
consequence of this discovery. It is trivial that sentence S is deducible from
K just in case S is deducible from some finite subset of K. It is not
trivial that S is a consequence of K just in case S is a consequence of some
finite subset of K. This compactness property had to be shown. A system of
natural deduction is axiomless. Proofs of theorems within a system are
generally easier with natural deduction. Proofs of theorems about a system,
such as the results mentioned in the previous paragraph, are generally easier
if the system has axioms. In a secondary sense, ‘deduction’ refers to an
inference in which a speaker claims the conclusion follows necessarily from the
premises.
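The effectiveness claim above is easy to make concrete: once the axioms and rules are effectively specified, checking a purported deduction is mechanical. The sketch below (Python; representing formulas as strings and tuples, with modus ponens as the sole rule, is an illustrative simplification) checks each line for being an axiom, a premise, or a modus ponens consequence of earlier lines.

    # Sketch of a deduction checker for a system whose sole rule is modus ponens.
    # A formula is an atom (a string) or an implication ("->", antecedent, consequent).

    def is_deduction(sequence, axioms, premises):
        """True iff every line is an axiom, a premise, or follows from two
        earlier lines by modus ponens (from A and ("->", A, B), infer B)."""
        for i, line in enumerate(sequence):
            earlier = sequence[:i]
            by_mp = any(("->", a, line) in earlier for a in earlier)
            if not (line in axioms or line in premises or by_mp):
                return False
        return True

    p, q = "p", "q"
    # Deduce q from premises p and p -> q:
    print(is_deduction([p, ("->", p, q), q], set(), {p, ("->", p, q)}))  # True
    # q alone is not deducible from premise p:
    print(is_deduction([q], set(), {p}))  # False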
deduction theorem, a
result about certain systems of formal logic relating derivability and the
conditional. It states that if a formula B is derivable from A and possibly
other assumptions, then the formula A → B is derivable without the assumption of A: in symbols, if Γ ∪ {A} ⊢ B, then Γ ⊢ A → B. The thought is that, for example, if
Socrates is mortal is derivable from the assumptions All men are mortal and
Socrates is a man, then If Socrates is a man he is mortal is derivable from All
men are mortal. Likewise, If all men are mortal then Socrates is mortal is
derivable from Socrates is a man. In general, the deduction theorem is a
significant result only for axiomatic or Hilbert-style formulations of logic.
In most natural deduction formulations a rule of conditional proof explicitly licenses derivations of A → B from Γ ∪ {A}, and so there is nothing to prove.
default logic, a formal
system for reasoning with defaults, developed by Raymond Reiter in 1980.
Reiter’s defaults have the form ‘P : MQ1, . . . , MQn / R’, read ‘If P is believed and Q1, . . . , Qn are consistent with one’s beliefs, then R may be believed’.
Whether a proposition is consistent with one’s beliefs depends on what defaults
have already been applied. Given the defaults P:MQ/Q and R:M-Q/-Q, and the
facts P and R, applying the first default yields Q while applying the second
default yields -Q. So applying either default blocks the other. Consequently, a
default theory may have several default extensions. Normal defaults having the
form P:MQ/Q, useful for representing simple cases of nonmonotonic reasoning,
are inadequate for more complex cases. Reiter produces a reasonably clean proof
theory for normal default theories and proves that every normal default theory
has an extension.
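The order-sensitivity described above is simple to exhibit. In the sketch below (Python; restricting beliefs to literals and testing consistency by the absence of the negated literal are illustrative simplifications of Reiter’s system), a normal default P:MQ/Q fires when P is believed and the negation of Q is not, and the order in which the two conflicting defaults are tried decides which of the two extensions results.

    # Normal defaults P:MQ/Q over literals; "-Q" is the negation of "Q".
    def negate(lit):
        return lit[1:] if lit.startswith("-") else "-" + lit

    def extension(facts, defaults):
        """defaults: (prerequisite, consequent) pairs, applied greedily."""
        beliefs = set(facts)
        changed = True
        while changed:
            changed = False
            for prereq, conseq in defaults:
                # Fire if the prerequisite is believed and the consequent
                # is consistent with (i.e., not negated in) current beliefs.
                if (prereq in beliefs and conseq not in beliefs
                        and negate(conseq) not in beliefs):
                    beliefs.add(conseq)
                    changed = True
        return beliefs

    # The entry's example: P:MQ/Q and R:M-Q/-Q with facts P and R.
    print(extension({"P", "R"}, [("P", "Q"), ("R", "-Q")]))  # contains Q
    print(extension({"P", "R"}, [("R", "-Q"), ("P", "Q")]))  # contains -Q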
defeasibility, a property
that rules, principles, arguments, or bits of reasoning have when they might be
defeated by some competitor. For example, the epistemic principle ‘Objects
normally have the properties they appear to have’ or the normative principle
‘One should not lie’ are defeated, respectively, when perception occurs under unusual circumstances (e.g., under colored lights) or when there is some overriding moral consideration (e.g., to prevent murder). Apparently declarative
sentences such as ‘Birds typically fly’ can be taken in part as expressing
defeasible rules: take something’s being a bird as evidence that it flies.
Defeasible arguments and reasoning inherit their defeasibility from the use of
defeasible rules or principles. Recent analyses of defeasibility include
circumscription and default logic, which belong to the broader category of
non-monotonic logic. The rules in several of these formal systems contain
special antecedent conditions and are not truly defeasible since they apply
whenever their conditions are satisfied. Rules and arguments in other
non-monotonic systems justify their conclusions only when they are not defeated
by some other fact, rule, or argument. John Pollock distinguishes between
rebutting and undercutting defeaters. ‘Snow is not normally red’ rebuts in
appropriate circumstances the principle ‘Things that look red normally are red’,
while ‘If the available light is red, do not use the principle that things that
look red normally are red’ only undercuts the embedded rule. Pollock has
influenced most other work on formal systems for defeasible
reasoning.
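Pollock’s distinction can be sketched in code. The following Python fragment is a deliberately simplified illustration (not Pollock’s own OSCAR system); the rule names and facts are invented for the example:

# A defeasible rule: (name, antecedents, conclusion).
RULES = [("looks-red", ["looks_red"], "red")]

# Rebutters are reasons against a conclusion itself; undercutters are
# reasons against using a particular rule, without denying its conclusion.
REBUTTERS = {"red": ["is_snow"]}             # snow is not normally red
UNDERCUTTERS = {"looks-red": ["red_light"]}  # red light undercuts the rule

def concluded(goal, facts):
    for name, ants, concl in RULES:
        if concl != goal or not all(a in facts for a in ants):
            continue
        if any(r in facts for r in REBUTTERS.get(goal, [])):
            continue  # rebutted: an overriding reason against the conclusion
        if any(u in facts for u in UNDERCUTTERS.get(name, [])):
            continue  # undercut: this rule may not be used here
        return True
    return False

print(concluded("red", {"looks_red"}))               # True
print(concluded("red", {"looks_red", "is_snow"}))    # False (rebutted)
print(concluded("red", {"looks_red", "red_light"}))  # False (undercut)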
definiendum (plural:
definienda), the expression that is defined in a definition. The expression that
gives the definition is the definiens (plural: definientia). In the definition of
‘father’ as ‘male parent’, ‘father’ is the definiendum and ‘male parent’ is the
definiens. In the definition ‘A human being is a rational animal’, ‘human
being’ is the definiendum and ‘rational animal’ is the definiens. Similar terms
are used in the case of conceptual analyses, whether they are meant to provide
synonyms or not; ‘definiendum’ for ‘analysandum’ and ‘definiens’ for
‘analysans’. In ‘x knows that p if and only if it is true that p, x believes
that p, and x’s belief that p is properly justified’, ‘x knows that p’ is the
analysandum and ‘it is true that p, x believes that p, and x’s belief that p is
properly justified’ is the analysans.
definist, someone who
holds that moral terms, such as ‘right’, and evaluative terms, such as
‘good’ (in short, normative terms) are definable in non-moral, non-evaluative
(i.e., non-normative) terms. William Frankena offers a broader account of a
definist as one who holds that ethical terms are definable in non-ethical
terms. This would allow that they are definable in non-ethical but evaluative
terms (say, ‘right’ in terms of what is
non-morally intrinsically good). Definists who are also naturalists hold that
moral terms can be defined by terms that denote natural properties, i.e.,
properties whose presence or absence can be determined by observational means.
They might define ‘good’ as ‘what conduces to pleasure’. Definists who are not
naturalists will hold that the terms that do the defining do not denote natural
properties, e.g., that ‘right’ means ‘what is commanded by God’.
definition, specification
of the meaning or, alternatively, conceptual content, of an expression. For
example, ‘period of fourteen days’ is a definition of ‘fortnight’. Definitions
have traditionally been judged by rules like the following: (1) A definition
should not be too narrow. ‘Unmarried adult male psychiatrist’ is too narrow a
definition for ‘bachelor’, for some bachelors are not psychiatrists. ‘Having
vertebrae and a liver’ is too narrow for ‘vertebrate’, for, even though all
actual vertebrate things have vertebrae and a liver, it is possible for a
vertebrate thing to lack a liver. (2) A definition should not be too broad.
‘Unmarried adult’ is too broad a definition for ‘bachelor’, for not all
unmarried adults are bachelors. ‘Featherless biped’ is too broad for ‘human
being’, for even though all actual featherless bipeds are human beings, it is
possible for a featherless biped to be non-human. (3) The defining expression in
a definition should ideally exactly match the degree of vagueness of the
expression being defined (except in a precising definition). ‘Adult female’ for
‘woman’ does not violate this rule, but ‘female at least eighteen years old’
for ‘woman’ does. (4) A definition should not be circular. If ‘desirable’ defines
‘good’ and ‘good’ defines ‘desirable’, these definitions are circular.
Definitions fall into at least the following kinds: analytical definition:
definition whose corresponding biconditional is analytic or gives an analysis
of the definiendum: e.g., ‘female fox’ for ‘vixen’, where the corresponding
biconditional ‘For any x, x is a vixen if and only if x is a female fox’ is
analytic; ‘true in all possible worlds’ for ‘necessarily true’, where the
corresponding biconditional ‘For any P, P is necessarily true if and only if P
is true in all possible worlds’ gives an analysis of the definiendum. contextual
definition: definition of an expression as it occurs in a larger expression:
e.g., ‘If it is not the case that Q, then P’ contextually defines ‘unless’ as
it occurs in ‘P unless Q’; ‘There is at least one entity that is F and is
identical with any entity that is F’ contextually defines ‘exactly one’ as
it occurs in ‘There is exactly one F’. Recursive definitions (see below) are an
important variety of contextual definition. Another important application of
contextual definition is Russell’s theory of descriptions, which defines ‘the’
as it occurs in contexts of the form ‘The so-and-so is such-and-such’.
coordinative definition: definition of a theoretical term by non-theoretical
terms: e.g., ‘the forty-millionth part of the circumference of the earth’ for
‘meter’. definition by genus and species: When an expression is said to be
applicable to some but not all entities of a certain type and inapplicable to
all entities not of that type, the type in question is the genus, and the
subtype of all and only those entities to which the expression is applicable is
the species: e.g., in the definition ‘rational animal’ for ‘human’, the type
animal is the genus and the subtype human is the species. Each species is
distinguished from any other of the same genus by a property called the
differentia. definition in use: specification of how an expression is used or
what it is used to express: e.g., ‘uttered to express astonishment’ for ‘my
goodness’. Wittgenstein emphasized the importance of definition in use in his
use theory of meaning. definition per genus et differentiam: definition by
genus and difference; same as definition by genus and species. explicit
definition: definition that makes it clear that it is a definition and identifies
the expression being defined as such: e.g., ‘Father’ means ‘male parent’; ‘For
any x, x is a father by definition if and only if x is a male parent’. implicit
definition: definition that is not an explicit definition. lexical definition:
definition of the kind commonly thought appropriate for dictionary definitions
of natural language terms, namely, a specification of their conventional
meaning. nominal definition: definition of a noun (usually a common noun), giving
its linguistic meaning. Typically it is in terms of macrosensible
characteristics: e.g., ‘yellow malleable metal’ for ‘gold’. Locke spoke of
nominal essence and contrasted it with real essence. ostensive definition:
definition by an example in which the referent is specified by pointing or
showing in some way: e.g., “ ‘Red’ is that color,” where the word ‘that’ is
accompanied with a gesture pointing to a patch of colored cloth; “ ‘Pain’ means
this,” where ‘this’ is accompanied with an insertion of a pin through the
hearer’s skin; “ ‘Kangaroo’ applies to all and only animals like that,” where
‘that’ is accompanied by pointing to a particular kangaroo. persuasive
definition: definition designed to affect or appeal to the psychological states
of the party to whom the definition is given, so that a claim will appear more
plausible to the party than it is: e.g., ‘self-serving manipulator’ for
‘politician’, where the claim in question is that all politicians are immoral.
precising definition: definition of a vague expression intended to reduce its
vagueness: e.g., ‘snake longer than half a meter and shorter than two meters’
for ‘snake of average length’; ‘having assets ten thousand times the median
figure’ for ‘wealthy’. prescriptive definition: stipulative definition that, in
a recommendatory way, gives a new meaning to an expression with a previously
established meaning: e.g., ‘male whose primary sexual preference is for other
males’ for ‘gay’. real definition: specification of the metaphysically
necessary and sufficient condition for being the kind of thing a noun (usually a
common noun) designates: e.g., ‘element with atomic number 79’ for ‘gold’. Locke
spoke of real essence and contrasted it with nominal essence. recursive
definition (also called inductive definition and definition by recursion): definition
in three clauses in which (1) the expression defined is applied to certain
particular items (the base clause); (2) a rule is given for reaching further items
to which the expression applies (the recursive, or inductive, clause); and (3) it
is stated that the expression applies to nothing else (the closure clause). E.g.,
‘John’s parents are John’s ancestors; any parent of John’s ancestor is John’s
ancestor; nothing else is John’s ancestor’. By the base clause, John’s mother
and father are John’s ancestors. Then by the recursive clause, John’s mother’s
parents and John’s father’s parents are John’s ancestors; so are their parents,
and so on. Finally, by the last closure clause, these people exhaust John’s
ancestors. The following defines multiplication in terms of addition: ‘0 × n = 0;
(m + 1) × n = (m × n) + n; nothing else is the result of multiplying integers’.
The base clause tells us, e.g., that 0 × 4 = 0. The recursive clause tells us,
e.g., that (0 + 1) × 4 = (0 × 4) + 4. We then know that 1 × 4 = 0 + 4 = 4.
Likewise, e.g., 2 × 4 = (1 + 1) × 4 = (1 × 4) + 4 = 4 + 4 = 8.
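Both recursive definitions translate directly into code; a minimal Python sketch (the dictionary representation of parenthood is a device for the example):

def ancestors(person, parents):
    # Base clause: a person's parents are ancestors.
    result = set(parents.get(person, []))
    # Recursive clause: any parent of an ancestor is an ancestor.
    for p in parents.get(person, []):
        result |= ancestors(p, parents)
    # Closure clause: nothing else is returned.
    return result

def mult(m, n):
    if m == 0:
        return 0                 # base clause: 0 x n = 0
    return mult(m - 1, n) + n    # recursive clause: (m+1) x n = (m x n) + n

parents = {"John": ["Mary", "Tom"], "Mary": ["Ann"], "Tom": []}
print(sorted(ancestors("John", parents)))  # ['Ann', 'Mary', 'Tom']
print(mult(2, 4))                          # 8, as in the worked example above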
stipulative definition: definition regardless of the ordinary or usual conceptual content
of the expression defined. It postulates a content, rather than aiming to capture
the content already associated with the expression. Any explicit definition
that introduces a new expression into the language is a stipulative definition:
e.g., “For the purpose of our discussion ‘existent’ means ‘perceivable’”; “By
‘zoobeedoobah’ we shall mean ‘vain millionaire who is addicted to alcohol’.”
synonymous definition: definition of a word or other linguistic expression by
another word synonymous with it: e.g., ‘buy’ for ‘purchase’; ‘madness’ for
‘insanity’.
degenerate case, an
expression used more or less loosely to indicate an individual or class that
falls outside of a given background class to which it is otherwise very closely
related, often in virtue of an ordering of a more comprehensive class. A
degenerate case of one class is often a limiting case of a more comprehensive
class. Rest (zero velocity) is a degenerate case of motion (positive velocity)
while being a limiting case of velocity. The circle is a degenerate case of an
equilateral and equiangular polygon. In technical or scientific contexts, the
conventional term for the background class is often “stretched” to cover
otherwise degenerate cases. A figure composed of two intersecting lines is a
degenerate case of hyperbola in the sense of synthetic geometry, but it is a
limiting case of hyperbola in the sense of analytic geometry. The null set is a
degenerate case of set in an older sense but a limiting case of set in a modern
sense. A line segment is a degenerate case of rectangle when rectangles are
ordered by ratio of length to width, but it is not a limiting case under these
conditions.
degree, also called
arity or adicity, in formal languages, a property of predicate and function
expressions that determines the number of terms with which the expression is
correctly combined to yield a well-formed expression. If an expression combines
with a single term to form a well-formed expression, it is of degree one
(monadic, singulary). Expressions that combine with two terms are of degree two
(dyadic, binary), and so on. Expressions of degree greater than or equal to two
are polyadic. The formation rules of a formalized language must effectively
specify the degrees of its primitive expressions as part of the effective
determination of the class of well-formed formulas. Degree is commonly indicated
by an attached superscript consisting of an Arabic numeral. Formalized
languages have been studied that contain expressions having variable degree or
variable adicity and that can thus combine with any finite number of terms. An
abstract relation that would be appropriate as extension of a predicate
expression is subject to the same terminology, and likewise for function
expressions and their associated functions.
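For illustration, the formation-rule check the entry describes can be written in a few lines of Python (the signature and predicate names are invented for the example):

SIGNATURE = {"Tall": 1, "Loves": 2, "Between": 3}  # predicate -> degree

def well_formed(predicate, terms):
    # An atomic formula is well formed iff the predicate is combined with
    # exactly as many terms as its degree specifies.
    return SIGNATURE.get(predicate) == len(terms)

print(well_formed("Loves", ["a", "b"]))  # True: 'Loves' is dyadic
print(well_formed("Loves", ["a"]))       # False: too few terms
print(well_formed("Tall", ["a"]))        # True: 'Tall' is monadic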
degree of unsolvability,
a maximal set of equally complex sets of natural numbers, with comparative
complexity of sets of natural numbers construed as recursion-theoretic
reducibility ordering. Recursion theorists investigate various notions of
reducibility between sets of natural numbers, i.e., various ways of filling in
the following schematic definition. For sets A and B of natural numbers: A is
reducible to B iff (if and only if) there is an algorithm whereby each membership
question about A (e.g., ‘17 ∈ A?’) could be answered allowing consultation of an
“oracle” that would correctly answer
each membership question about B. This does not presuppose that there is a
“real” oracle for B; the motivating idea is counterfactual: A is reducible to B
iff: if membership questions about B were decidable then membership questions
about A would also be decidable. On the other hand, the mathematical
definitions of notions of reducibility involve no subjunctive conditionals or
other intensional constructions. The notion of reducibility is determined by constraints
on how the algorithm could use the oracle. Imposing no constraints yields
T-reducibility (‘T’ for Turing), the most important and most studied notion of
reducibility. Fixing a notion r of reducibility: A is r-equivalent to B iff A
is r-reducible to B and B is r-reducible to A. If r-reducibility is transitive,
r-equivalence is an equivalence relation on the class of sets of natural
numbers, one reflecting a notion of equal complexity for sets of natural
numbers. A degree of unsolvability relative to r (an r-degree) is an equivalence
class under that equivalence relation, i.e., a maximal class of sets of natural
numbers any two members of which are r-equivalent, i.e., a maximal class of
equally complex (in the sense of r-reducibility) sets of natural numbers. The
r-reducibility ordering of sets of natural numbers transfers to the r-degrees:
for r-degrees d and d′, let d ≤ d′ iff for some A ∈ d and B ∈ d′, A is
r-reducible to B. The study of r-degrees is the study of them under this
ordering. The degrees generated by T-reducibility are the Turing degrees.
Without qualification, ‘degree of unsolvability’ means ‘Turing degree’. The
least T-degree is the set of all recursive (i.e., using Church’s thesis, solvable)
sets of natural numbers. So the phrase ‘degree of unsolvability’ is slightly
misleading: the least such degree is “solvability.” By effectively coding
functions from natural numbers to natural numbers as sets of natural numbers,
we may think of such a function as belonging to a degree: that of its coding set.
Recursion theorists have extended the notions of reducibility and degree of
unsolvability to other domains, e.g., transfinite ordinals and higher types
taken over the natural numbers.
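A trivial instance of the schematic definition, in Python: the set of even numbers is reducible to the set of odd numbers, since each membership question about the former can be answered by a single query to an oracle for the latter. (Both sets here are recursive, so this shows only the form of the definition; the interesting degrees arise from nonrecursive sets.)

def even_via_odd_oracle(n, odd_oracle):
    # Decide 'is n even?' by one consultation of an oracle for 'is n odd?'.
    return not odd_oracle(n)

odd_oracle = lambda n: n % 2 == 1  # a stand-in; no 'real' oracle is presupposed
print(even_via_odd_oracle(17, odd_oracle))  # False: 17 is not even
print(even_via_odd_oracle(10, odd_oracle))  # True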
deism, the view that true
religion is natural religion. Some self-styled Christian deists accepted
revelation although they argued that its content is essentially the same as
natural religion. Most deists dismissed revealed religion as a fiction. God
wants his creatures to be happy and has ordained virtue as the means to it.
Since God’s benevolence is disinterested, he will ensure that the knowledge
needed for happiness is universally accessible. Salvation cannot, then, depend
on special revelation. True religion is an expression of a universal human
nature whose essence is reason and is the same in all times and places.
Religious traditions such as Christianity and Islam originate in credulity,
political tyranny, and priestcraft, which corrupt reason and overlay natural
religion with impurities. Deism is largely a seventeenth- and
eighteenth-century phenomenon and was most prominent in England. Among the more
important English deists were John Toland (1670–1722), Anthony Collins (1676–1729),
Herbert of Cherbury (1583–1648), Matthew Tindal (1657–1733), and Thomas Chubb
(1679–1747). Continental deists included Voltaire and Reimarus. Thomas Paine and
Elihu Palmer (1764–1806) were prominent American deists. Orthodox writers in this
period use ‘deism’ as a vague term of abuse. By the late eighteenth century,
the term came to mean belief in an “absentee God” who creates the world,
ordains its laws, and then leaves it to its own devices.
de Maistre, Joseph-Marie
(1753–1821), French political theorist, diplomat, and Roman Catholic exponent of
theocracy. He was educated by the Jesuits in Turin. His counterrevolutionary
political philosophy aimed at restoring the foundations of morality, the
family, society, and the state in postrevolutionary Europe. Against
Enlightenment ideals, he reclaimed Thomism, defended the hereditary and
absolute monarchy, and championed ultramontanism (The Pope, 1821). Considerations
on France (1796) argues that the decline of moral and religious values was
responsible for the “satanic” 1789 revolution. Hence Christianity and
Enlightenment philosophy were engaged in a fight to the death that he claimed
the church would eventually win. Deeply pessimistic about human nature, the
Essay on the Generating Principle of Political Constitutions (1810) traces the
origin of authority to the human craving for order and discipline. Saint
Petersburg Evenings (1821) urges philosophy to surrender to religion and reason
to faith. J.-L.S.
demarcation, the line separating empirical science from
mathematics and logic, from metaphysics, and from pseudoscience. Science traditionally
was supposed to rely on induction, the formal disciplines (including metaphysics)
on deduction. In the verifiability criterion, the logical positivists
identified the demarcation of empirical science from metaphysics with the
demarcation of the cognitively meaningful from the meaningless, classifying
metaphysics as gibberish, and logic and mathematics, more charitably, as
without sense. Noting that, because induction is invalid, the theories of
empirical science are unverifiable, Popper proposed falsifiability as their
distinguishing characteristic, and remarked that some metaphysical doctrines,
such as atomism, are obviously meaningful. It is now recognized that science is
suffused with metaphysical ideas, and Popper’s criterion is therefore perhaps a
rather rough criterion of demarcation of the empirical from the nonempirical
rather than of the scientific from the non-scientific. It repudiates the
unnecessary task of demarcating the cognitively meaningful from the cognitively
meaningless.
demiurge (from Greek
demiourgos, ‘artisan’, ‘craftsman’), a deity who shapes the material world from
the preexisting chaos. Plato introduces the demiurge in his Timaeus. Because he
is perfectly good, the demiurge wishes to communicate his own goodness. Using
the Forms as a model, he shapes the initial chaos into the best possible image
of these eternal and immutable archetypes. The visible world is the result.
Although the demiurge is the highest god and the best of causes, he should not
be identified with the God of theism. His ontological and axiological status is
lower than that of the Forms, especially the Form of the Good. He is also
limited. The material he employs is not created by him. Furthermore, it is
disorderly and indeterminate, and thus partially resists his rational ordering.
In gnosticism, the demiurge is the ignorant, weak, and evil or else morally
limited cause of the cosmos. In the modern era the term has occasionally been
used for a deity who is limited in power or knowledge. Its first occurrence in
this sense appears to be in J. S. Mill’s Theism (1874).
Democritus (c.460–c.370
B.C.), Greek pre-Socratic philosopher. He was born at Abdera, in Thrace.
Building on Leucippus and his atomism, he developed the atomic theory in The
Little World-system and numerous other writings. In response to the Eleatics’
argument that the impossibility of not-being entailed that there is no change,
the atomists posited the existence of a plurality of tiny indivisible
beings (the atoms) and not-being
(the void, or empty space). Atoms do not come into being or perish, but
they do move in the void, making possible the existence of a world, and indeed
of many worlds. For the void is infinite in extent, and filled with an infinite
number of atoms that move and collide with one another. Under the right
conditions a concentration of atoms can begin a vortex motion that draws in
other atoms and forms a spherical heaven enclosing a world. In our world there
is a flat earth surrounded by heavenly bodies carried by a vortex motion. Other
worlds like ours are born, flourish, and die, but their astronomical
configurations may be different from ours and they need not have living
creatures in them. The atoms are solid bodies with countless shapes and sizes,
apparently having weight or mass, and capable of motion. All other properties
are in some way derivative of these basic properties. The cosmic vortex motion
causes a sifting that tends to separate similar atoms as the sea arranges
pebbles on the shore. For instance, heavier atoms sink to the center of the
vortex, and lighter atoms such as those of fire rise upward. Compound bodies
can grow by the aggregations of atoms that become entangled with one another.
Living things, including humans, originally emerged out of slime. Life is
caused by fine, spherical soul atoms, and living things die when these atoms
are lost. Human culture gradually evolved through chance discoveries and
imitations of nature. Because the atoms are invisible and the only real
properties are properties of atoms, we cannot have direct knowledge of
anything. Tastes, temperatures, and colors we know only “by convention.” In
general the senses cannot give us anything but “bastard” knowledge; but there
is a “legitimate” knowledge based on reason, which takes over where the senses
leave off, presumably demonstrating that
there are atoms that the senses cannot testify of. Democritus offers a causal
theory of perception (sometimes called the theory of effluxes), accounting for
tastes in terms of certain shapes of atoms and for sight in terms of
“effluences” or moving films of atoms that impinge on the eye. Drawing on both
atomic theory and conventional wisdom, Democritus develops an ethics of
moderation. The aim of life is equanimity (euthumiê), a state of balance achieved
by moderation and proportionate pleasures. Envy and ambition are incompatible
with the good life. Although Democritus was one of the most prolific writers of
antiquity, his works were all lost. Yet we can still identify his atomic theory
as the most fully worked out of pre-Socratic philosophies. His theory of matter
influenced Plato’s Timaeus, and his naturalist anthropology became the
prototype for liberal social theories. Democritus had no immediate successors,
but a century later Epicurus transformed his ethics into a philosophy of
consolation founded on atomism. Epicureanism thus became the vehicle through
which atomic theory was transmitted to the early modern period.
De Morgan, Augustus
(1806–71), prolific British mathematician, logician, and philosopher of
mathematics and logic. He is remembered chiefly for several lasting
contributions to logic and philosophy of logic, including discovery and
deployment of the concept of universe of discourse, the cofounding of
relational logic, adaptation of what are now known as De Morgan’s laws, and
several terminological innovations including the expression ‘mathematical
induction’. His main logical works, the monograph Formal Logic (1847) and the
series of articles “On the Syllogism” (1846–62), demonstrate wide historical and
philosophical learning, synoptic vision, penetrating originality, and disarming
objectivity. His relational logic treated a wide variety of inferences
involving propositions whose logical forms were significantly more complex than
those treated in the traditional framework stemming from Aristotle, e.g., ‘If
every doctor is a teacher, then every ancestor of a doctor is an ancestor of a
teacher’. De Morgan’s conception of the infinite variety of logical forms of
propositions vastly widens that of his predecessors and even that of his able
contemporaries such as Boole, Hamilton, Mill, and Whately. De Morgan did as
much as any of his contemporaries toward the creation of modern mathematical
logic.
De Morgan’s laws, the
logical principles ¬(A ∧ B) ↔ (¬A ∨ ¬B), ¬(A ∨ B) ↔ (¬A ∧ ¬B), ¬(¬A ∧ ¬B) ↔
(A ∨ B), and ¬(¬A ∨ ¬B) ↔ (A ∧ B), though the term is occasionally used to cover
only the first two.
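Because the laws are truth-functional, they can be verified exhaustively; a short Python check:

from itertools import product

laws = [
    lambda a, b: (not (a and b)) == ((not a) or (not b)),
    lambda a, b: (not (a or b)) == ((not a) and (not b)),
    lambda a, b: (not ((not a) and (not b))) == (a or b),
    lambda a, b: (not ((not a) or (not b))) == (a and b),
]
# True: each equivalence holds under all four truth-value assignments.
print(all(law(a, b) for law in laws for a, b in product([True, False], repeat=2)))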
Dennett, Daniel Clement
(b.1942), American philosopher, author of books on topics in the philosophy of
mind, free will, and evolutionary biology, and tireless advocate of the
importance of philosophy for empirical work on evolution and on the nature of
the mind. Dennett is perhaps best known for arguing that a creature or, more
generally, a system, S, possesses states of mind if and only if the ascription
of such states to S facilitates explanation and prediction of S’s behavior (The
Intentional Stance, 1987). S might be a human being, a chimpanzee, a desktop
computer, or a thermostat. In ascribing beliefs and desires to S we take up an
attitude toward S, the intentional stance. We could just as well (although for
different purposes) take up other stances: the design stance (we understand S as
a kind of engineered system) or the physical stance (we regard S as a purely
physical system). It might seem that, although we often enough ascribe beliefs
and desires to desktop computers and thermostats, we do not mean to do so
literally as with people. Dennett’s
contention, however, is that there is nothing more nor less to having beliefs,
desires, and other states of mind than being explicable by reference to such
things. This, he holds, is not to demean beliefs, but only to affirm that to
have a belief is to be describable in this particular way. If you are so describable,
then it is true, literally true, that you have beliefs. Dennett extends this
approach to consciousness, which he views not as an inwardly observable
performance taking place in a “Cartesian Theater,” but as a story
we tell about ourselves, the compilation of “multiple drafts” concocted by
neural subsystems (see Consciousness Explained, 1991). Elsewhere (Darwin’s
Dangerous Idea, 1995) Dennett has argued that principles of Darwinian selection
apply to diverse domains including cosmology and human culture, and offered a
compatibilist account of free will with an emphasis on agents’ control over
their actions (Elbow Room, 1984).
denotation, the thing or
things that an expression applies to; extension. The term is used in contrast
with ‘meaning’ and ‘connotation’. A pair of expressions may apply to the same
things, i.e., have the same denotation, yet differ in meaning: ‘triangle’,
‘trilateral’; ‘creature with a heart’, ‘creature with a kidney’; ‘bird’,
‘feathered earthling’; ‘present capital of France’, ‘City of Light’. If a term
does not apply to anything, some will call it denotationless, while others
would say that it denotes the empty set. Such terms may differ in meaning:
‘unicorn’, ‘centaur’, ‘square root of pi’. Expressions may apply to the same
things, yet bring to mind different associations, i.e., have different
connotations: ‘persistent’, ‘stubborn’, ‘pigheaded’; ‘white-collar employee’,
‘office worker’, ‘professional paper-pusher’; ‘Lewis Carroll’, ‘Reverend
Dodgson’. There can be confusion about the denotation-connotation terminology,
because this pair is used to make other contrasts. Sometimes the term
‘connotation’ is used more broadly, so that any difference of either meaning or
association is considered a difference of connotation. Then ‘creature with a
heart’ and ‘creature with a liver’ might be said to denote the same individuals
or sets but to connote different properties. In a second use, denotation is the
semantic value of an expression. Sometimes the denotation of a general term is
said to be a property, rather than the things having the property. This occurs
when the denotation-connotation terminology is used to contrast the property
expressed with the connotation. Thus ‘persistent’ and ‘pig-headed’ might be
said to denote the same property but differ in connotation.
deontic logic, the logic
of obligation and permission. There are three principal types of formal deontic
systems. (1) Standard deontic logic, or SDL, results from adding a pair of monadic
deontic operators O and P, read as “it ought to be that” and “it is permissible
that,” respectively, to the classical propositional calculus. SDL contains the
following axioms: tautologies of propositional logic, OA ↔ ¬P¬A, OA → ¬O¬A,
O(A → B) → (OA → OB), and OT, where T stands for any tautology. Rules of
inference are modus ponens and substitution. See the survey of SDL by Dagfinn
Føllesdal and Risto Hilpinen in R. Hilpinen, ed., Deontic Logic, 1971. (2) Dyadic
deontic logic is obtained by adding a pair of dyadic deontic operators O( / ) and P( / ), to be read as “it ought to be that
. . . , given that . . .” and “it is permissible that . . . , given that . . .
,” respectively. The SDL monadic operator O is defined by OA ↔ O(A/T); i.e., a
statement of absolute obligation OA becomes an obligation conditional on
tautologous conditions. A statement of conditional obligation O(A/B) is true
provided that some value realized at some B-world where A holds is better than
any value realized at any B-world where A does not hold. This axiological
construal of obligation is typically accompanied by these axioms and rules of
inference: tautologies of propositional logic, modus ponens, and substitution,
P(A/C) ↔ ¬O(¬A/C), O(A & B/C) ↔ [O(A/C) & O(B/C)], O(A/C) → P(A/C),
O(T/C) → O(C/C), O(T/C) → O(T/B ∨ C), [O(A/B) & O(A/C)] → O(A/B ∨ C),
[P(B/B ∨ C) & O(A/B ∨ C)] → O(A/B), and one further axiom involving ⊥, the
negation of any tautology. See the comparison of alternative
dyadic systems in Lennart Åqvist, Introduction to Deontic Logic and the Theory
of Normative Systems, 1987. (3) Two-sorted deontic logic, due to Castañeda
(Thinking and Doing, 1975), pivotally distinguishes between propositions, the
bearers of truth-values, and practitions, the contents of commands,
imperatives, requests, and such. Deontic operators apply to practitions,
yielding propositions. The deontic operators Oi, Pi, Wi, and Li are read as “it
is obligatory i that,” “it is permissible i that,” “it is wrong i that,” and
“it is optional i that,” respectively, where i stands for any of the various types of
obligation, permission, and so on. Let p stand for indicatives, where these
express propositions; let A and B stand for practitives, understood to express
practitions; and allow p* to stand for both indicatives and practitives. For
deontic definitions there are PiA ↔ ¬Oi¬A, WiA ↔ Oi¬A, and LiA ↔ (¬OiA
& ¬Oi¬A). Axioms and rules of inference include p*, if p* has the form of
a truth-table tautology; OiA → ¬Oi¬A; O1A → A, where O1 represents
overriding obligation; modus ponens for both indicatives and practitives; and
the rule that if (p & A1 & . . . & An) → B is a theorem, so too is
(p & OiA1 & . . . & OiAn) → OiB.
deontic paradoxes, the
paradoxes of deontic logic, which typically arise as follows: a certain set of
English sentences about obligation or permission appears logically consistent,
but when these same sentences are represented in a proposed system of deontic
logic the result is a formally inconsistent set. To illustrate, a formulation
is provided below of how two of these paradoxes beset standard deontic logic.
The contrary-to-duty imperative paradox, made famous by Chisholm (Analysis,
1963), arises from juxtaposing two apparent truths: first, some of us sometimes
do what we should not do; and second, when such wrongful doings occur it is
obligatory that the best or a better be made of an unfortunate situation.
Consider this scenario. Art and Bill share an apartment. For no good reason Art
develops a strong animosity toward Bill. One evening Art’s animosity takes
over, and he steals Bill’s valuable lithographs. Art is later found out,
apprehended, and brought before Sue, the duly elected local
punishment-and-awards official. An inquiry reveals that Art is a habitual thief
with a history of unremitting parole violation. In this situation, it seems
that (1)–(4) are all true and hence mutually consistent: (1) Art steals from Bill.
(2) If Art steals from Bill, Sue ought to punish Art for stealing from Bill. (3) It
is obligatory that if Art does not steal from Bill, Sue does not punish him for
stealing from Bill. (4) Art ought not to steal from Bill. Turning to standard
deontic logic, or SDL, let s stand for ‘Art steals from Bill’ and let p stand
for ‘Sue punishes Art for stealing from Bill’. Then (1)–(4) are most naturally
represented in SDL as follows: (1a) s. (2a) s → Op. (3a) O(¬s → ¬p). (4a) O¬s. Of
these, (1a) and (2a) entail Op by propositional logic; next, given the SDL axiom
O(A → B) → (OA → OB), (3a) implies O¬s → O¬p; but the latter, taken in conjunction
with (4a), entails O¬p by propositional logic. In the combination of Op, O¬p,
and the axiom OA → ¬O¬A, of course, we have a formally inconsistent set.
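Set out step by step, the derivation runs as follows (standard SDL reasoning, displayed here for clarity):

\begin{align*}
&(1)\ s && \text{premise (1a)}\\
&(2)\ s \to Op && \text{premise (2a)}\\
&(3)\ O(\neg s \to \neg p) && \text{premise (3a)}\\
&(4)\ O\neg s && \text{premise (4a)}\\
&(5)\ Op && \text{from (1), (2) by modus ponens}\\
&(6)\ O\neg s \to O\neg p && \text{from (3), axiom } O(A \to B) \to (OA \to OB)\\
&(7)\ O\neg p && \text{from (4), (6) by modus ponens}\\
&(8)\ \neg O\neg p && \text{from (5), axiom } OA \to \neg O\neg A
\end{align*}

Lines (7) and (8) are contradictory, which is the inconsistency just noted.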
The paradox of the knower, first presented by Lennart Åqvist (Noûs, 1967), is
generated by these apparent truths: first, some of us sometimes do what we
should not do; and second, there are those who are obligated to know that such
wrongful doings occur. Consider the following scenario. Jones works as a
security guard at a local store. One evening, while Jones is on duty, Smith, a
disgruntled former employee out for revenge, sets the store on fire just a few
yards away from Jones’s work station. Here it seems that (1)–(3) are all true and
thus jointly consistent: (1) Smith set the store on fire while Jones was on duty.
(2) If Smith set the store on fire while Jones was on duty, it is obligatory that
Jones knows that Smith set the store on fire. (3) Smith ought not set the store
on fire. Independently, as a consequence of the concept of knowledge, there is
the epistemic theorem that (4) the statement that Jones knows that Smith set the
store on fire entails the statement that Smith set the store on fire. Next,
within SDL (1) and (2) surely appear to imply: (5) It is obligatory that Jones knows
that Smith set the store on fire. But (4) and (5) together yield (6) Smith ought to set
the store on fire, given the SDL theorem that if A → B is a theorem, so is OA →
OB. And therein resides the paradox: not only does (6) appear false, the
conjunction of (6) and (3) is formally inconsistent with the SDL axiom OA → ¬O¬A.
The overwhelming verdict among deontic logicians is that SDL genuinely
succumbs to the deontic paradoxes. But it is
controversial what other approach is best followed to resolve these puzzles. Two
of the most attractive proposals are Castañeda’s two-sorted system (Thinking and
Doing, 1975) and the agent-and-time-relativized approach of Fred Feldman
(Philosophical Perspectives, 1990).
dependence, in
philosophy, a relation of one of three main types: epistemic dependence, or
dependence in the order of knowing; conceptual dependence, or dependence in the
order of understanding; and ontological dependence, or dependence in the order
of being. When a relation of dependence runs in one direction only, we have a
relation of priority. For example, if wholes are ontologically dependent on
their parts, but the latter in turn are not ontologically dependent on the
former, one may say that parts are ontologically prior to wholes. The phrase
‘logical priority’ usually refers to priority of one of the three varieties to
be discussed here. Epistemic dependence. To say that the facts in some class B
are epistemically dependent on the facts in some other class A is to say this:
one cannot know any fact in B unless one knows some fact in A that serves as one’s
evidence for the fact in B. For example, it might be held that to know any fact
about one’s physical environment (e.g., that there is a fire in the stove), one
must know as evidence some facts about the character of one’s own sensory
experience (e.g., that one is feeling warm and seeing flames). This would be to
maintain that facts about the physical world are epistemically dependent on
facts about sensory experience. If one held in addition that the dependence is
not reciprocal (that one can know facts
about one’s sensory experience without knowing as evidence any facts about the
physical world), one would be maintaining
that the former facts are epistemically prior to the latter facts. Other
plausible though sometimes disputed examples of epistemic priority are the
following: facts about the behavior of others are epistemically prior to facts
about their mental states; facts about observable objects are epistemically
prior to facts about the invisible particles postulated by physics; and
singular facts (e.g., this crow is black) are epistemically prior to general
facts (e.g., all crows are black). Is there a class of facts on which all others
epistemically depend and that depend on no further facts in turn (a bottom story in the edifice of knowledge)?
Some foundationalists say yes, positing a level of basic or foundational facts
that are epistemically prior to all others. Empiricists are usually
foundationalists who maintain that the basic level consists of facts about
immediate sensory experience. Coherentists deny the need for a privileged
stratum of facts to ground the knowledge of all others; in effect, they deny
that any facts are epistemically prior to any others. Instead, all facts are on
a par, and each is known in virtue of the way in which it fits in with all the
rest. Sometimes it appears that two propositions or classes of them each
epistemically depend on the other in a vicious way: to know A, you must first know B, and to know
B, you must first know A. Whenever this is genuinely the case, we are in a skeptical
predicament and cannot know either proposition. For example, Descartes believed
that he could not be assured of the reliability of his own cognitions until he
knew that God exists and is not a deceiver; yet how could he ever come to know
anything about God except by relying on his own cognitions? This is the famous
problem of the Cartesian circle. Another example is the problem of induction as
set forth by Hume: to know that induction is a legitimate mode of inference,
one would first have to know that the future will resemble the past; but since
the latter fact is establishable only by induction, one could know it only if
one already knew that induction is legitimate. Solutions to these problems must
show that contrary to first appearances, there is a way of knowing one of the
problematic propositions independently of the other. Conceptual dependence. To
say that B’s are conceptually dependent on A’s means that to understand what a
B is, you must understand what an A is, or that the concept of a B can be
explained or understood only through the concept of an A. For example, it could
plausibly be claimed that the concept uncle can be understood only in terms of
the concept male. Empiricists typically maintain that we understand what an
external thing like a tree or a table is only by knowing what experiences it
would induce in us, so that the concepts we apply to physical things depend on
the concepts we apply to our experiences. They typically also maintain that
this dependence is not reciprocal, so that experiential concepts are
conceptually prior to physical concepts. Some empiricists argue from the thesis
of conceptual priority just cited to the corresponding thesis of epistemic
priority (that facts about experiences
are epistemically prior to facts about external objects). Turning the tables,
some foes of empiricism maintain that the conceptual priority is the other way
about: that we can describe and understand what kind of experience we are
undergoing only by specifying what kind of object typically causes it (“it’s a
smell like that of pine mulch”). Sometimes they offer this as a reason for
denying that facts about experiences are epistemically prior to facts about
physical objects. Both sides in this dispute assume that a relation of
conceptual priority in one direction excludes a relation of epistemic priority
in the opposite direction. But why couldn’t it be the case both that facts
about experiences are epistemically prior to facts about physical objects and
that concepts of physical objects are conceptually prior to concepts of
experiences? How the various kinds of priority and dependence are connected
(e.g., whether conceptual priority implies epistemic priority) is a matter in
need of further study. Ontological dependence. To say that entities of one sort
(the B’s) are ontologically dependent on entities of another sort (the A’s) means
this: no B can exist unless some A exists; i.e., it is logically or
metaphysically necessary that if any B exists, some A also exists. Ontological
dependence may be either specific (the existence of any B depending on the
existence of a particular A) or generic (the existence of any B depending merely
on the existence of some A or other). If B’s are ontologically dependent on A’s,
but not conversely, we may say that A’s are ontologically prior to B’s. The
traditional notion of substance is often defined in terms of ontological
priority: substances can exist without
other things, as Aristotle said, but the others cannot exist without them.
Leibniz believed that composite entities are ontologically dependent on simple
(i.e., partless) entities: any
composite object exists only because it has certain simple elements that are
arranged in a certain way. Berkeley, J. S. Mill, and other phenomenalists have
believed that physical objects are ontologically dependent on sensory
experiences: the existence of a
table or a tree consists in the occurrence of sensory experiences in certain
orderly patterns. Spinoza believed that all finite beings are ontologically
dependent on God and that God is ontologically dependent on nothing further;
thus God, being ontologically prior to everything else, is in Spinoza’s view
the only substance. Sometimes there are disputes about the direction in which a
relationship of ontological priority runs. Some philosophers hold that
extensionless points are prior to extended solids, others that solids are prior
to points; some say that things are prior to events, others that events are
prior to things. In the face of such disagreement, still other philosophers
such as Goodman have suggested that nothing is inherently or absolutely prior
to anything else: A’s may be prior to B’s in one conceptual scheme, B’s to A’s
in another, and there may be no saying which scheme is correct. Whether
relationships of priority hold absolutely or only relative to conceptual
schemes is one issue dividing realists and anti-realists.
depiction, pictorial
representation, also sometimes called “iconic representation.” Linguistic
representation is conventional: it is only by virtue of a convention that the
word ‘cats’ refers to cats. A picture of a cat, however, seems to refer to cats
by other than conventional means; for viewers can correctly interpret pictures
without special training, whereas people need special training to learn
languages. Though some philosophers, such as Goodman (Languages of Art), deny
that depiction involves a non-conventional element, most are concerned to give
an account of what this non-conventional element consists in. Some hold that it
consists in resemblance: pictures refer to their objects partly by resembling
them. Objections to this are that anything resembles anything else to some
degree; and that resemblance is a symmetric and reflexive relation, whereas depiction
is not. Other philosophers avoid direct appeal to resemblance: Richard Wollheim
(Painting as an Art) argues that depiction holds by virtue of the intentional
deployment of the natural human capacity to see objects in marked surfaces; and
Kendall Walton (Mimesis as Make-Believe) argues that depiction holds by virtue of
objects serving as props in reasonably rich and vivid visual games of
make-believe.
Derrida, Jacques (b.1930),
French philosopher, author of deconstructionism, and leading figure in the
postmodern movement. Postmodern thought seeks to move beyond modernism by
revealing inconsistencies or aporias within the Western European tradition from
Descartes to the present. These aporias are largely associated with onto-theology,
a term coined by Heidegger to characterize a manner of thinking about being and
truth that ultimately grounds itself in a conception of divinity.
Deconstruction is the methodology of revelation: it typically involves seeking
out binary oppositions defined interdependently by mutual exclusion, such as
good and evil or true and false, which function as founding terms for modern
thought. The ontotheological metaphysics underlying modernism is a metaphysics
of presence: to be is to be present, finally to be absolutely present to the
absolute, that is, to the divinity whose own being is conceived as presence to
itself, as the coincidence of being and knowing in the Being that knows all
things and knows itself as the reason for the being of all that is. Divinity
thus functions as the measure of truth. The aporia here, revealed by
deconstruction, is that this modernist measure of truth cannot meet its own
measure: the coincidence of what is and what is known is an impossibility for
finite intellects. Major influences on Derrida include Hegel, Freud, Heidegger,
Sartre, Saussure, and structuralist thinkers such as Lévi-Strauss, but it was
his early critique of Husserl, in Introduction à “L’Origine de la géométrie” de
Husserl (1962), that gained him recognition as a critic of the phenomenological
tradition and set the conceptual framework for his later work. Derrida sought
to demonstrate that the origin of geometry, conceived by Husserl as the guiding
paradigm for Western thought, was a supratemporal ideal of perfect knowing that
serves as the goal of human knowledge. Thus the origin of geometry is
inseparable from its end or telos, a thought that Derrida later generalizes in
his deconstruction of the notion of origin as such. He argues that this ideal
cannot be realized in time, hence cannot be grounded in lived experience, hence
cannot meet the “principle of principles” Husserl designated as the prime
criterion for phenomenology, the principle that all knowing must ground itself
in consciousness of an object that is coincidentally conscious of itself. This
revelation of the aporia at the core of phenomenology in particular and Western
thought in general was not yet labeled as a deconstruction, but it established
the formal structure that guided Derrida’s later deconstructive revelations of
the metaphysics of presence underlying the modernism in which Western thought
culminates.
Descartes, René (1596–1650),
French philosopher and mathematician, a founder of the “modern age” and perhaps
the most important figure in the intellectual revolution of the seventeenth
century in which the traditional systems of understanding based on Aristotle
were challenged and, ultimately, overthrown. His conception of philosophy was
all-embracing: it encompassed mathematics and the physical sciences as well as
psychology and ethics, and it was based on what he claimed to be absolutely
firm and reliable metaphysical foundations. His approach to the problems of
knowledge, certainty, and the nature of the human mind played a major part in
shaping the subsequent development of philosophy. Life and works. Descartes was
born in a small town near Tours that now bears his name. He was brought up by
his maternal grandmother (his mother having died soon after his birth), and at
the age of ten he was sent to the recently founded Jesuit college of La Flèche in Anjou, where he remained as a
boarding pupil for nine years. At La Flèche he studied classical literature and
traditional classics-based subjects such as history and rhetoric as well as
natural philosophy based on the Aristotelian system and theology. He later
wrote of La Flèche that he considered it “one of the best schools in Europe,”
but that, as regards the philosophy he had learned there, he saw that “despite
being cultivated for many centuries by the best minds, it contained no point
which was not disputed and hence doubtful.” At age twenty-two, having taken a
law degree at Poitiers, Descartes set out on a series of travels in Europe,
“resolving,” as he later put it, “to seek no knowledge other than that which
could be found either in myself or the great book of the world.” The most
important influence of this early period was Descartes’s friendship with the
Dutchman Isaac Beeckman, who awakened his lifelong interest in mathematics, a science in which he discerned precision and
certainty of the kind that truly merited the title of scientia (Descartes’s term
for genuine systematic knowledge based on reliable principles). A considerable
portion of Descartes’s energies as a young man was devoted to pure mathematics:
his essay on Geometry, published in 1637, incorporated results discovered during
the 1620s. But he also saw mathematics as the key to making progress in the
applied sciences; his earliest work, the Compendium Musicae, written in 1618
and dedicated to Beeckman, applied quantitative principles to the study of
musical harmony and dissonance. More generally, Descartes saw mathematics as a
kind of paradigm for all human understanding: “those long chains composed of
very simple and easy reasonings, which geometers customarily use to arrive at
their most difficult demonstrations, gave me occasion to suppose that all the
things which fall within the scope of human knowledge are interconnected in the
same way” (Discourse on the Method, Part II). In the course of his travels,
Descartes found himself closeted, on November 10, 1619, in a “stove-heated
room” in a town in southern Germany, where after a day of intense meditation,
he had a series of vivid dreams that convinced him of his mission to found a
new scientific and philosophical system. After returning to Paris for a time,
he emigrated to Holland in 1628, where he was to live (though with frequent
changes of address) for most of the rest of his life. By 1633 he had ready a treatise
on cosmology and physics, Le Monde; but he cautiously withdrew the work from
publication when he heard of the condemnation of Galileo by the Inquisition for
rejecting (as Descartes himself did) the traditional geocentric theory of the
universe. But in 1637 Descartes released for publication, in French, a sample
of his scientific work: three essays entitled the Optics, Meteorology, and
Geometry. Prefaced to that selection was an autobiographical introduction
entitled Discourse on the Method of rightly conducting one’s reason and
reaching the truth in the sciences. This work, which includes discussion of a
number of scientific issues such as the circulation of the blood, contains in
Part IV a summary of Descartes’s views on knowledge, certainty, and the metaphysical
foundations of science. Criticisms of his arguments here led Descartes to
compose his philosophical masterpiece, the Meditations on First Philosophy,
published in Latin in 1641, a dramatic
account of the voyage of discovery from universal doubt to certainty of one’s
own existence, and the subsequent struggle to establish the existence of God,
the nature and existence of the external world, and the relation between mind
and body. The Meditations aroused enormous interest among Descartes’s contemporaries,
and six sets of objections by celebrated philosophers and theologians including
Mersenne, Hobbes, Arnauld, and Gassendi were published in the same volume as
the first edition (a seventh set, by the Jesuit Pierre Bourdin, was included in
the second edition of 1642). A few years later, Descartes published, in Latin, a
mammoth compendium of his metaphysical and scientific views, the Principles of
Philosophy, which he hoped would become a
textbook to rival the standard texts based on Aristotle. In the later
1640s, Descartes became interested in questions of ethics and psychology,
partly as a result of acute questions about the implications of his system
raised by Princess Elizabeth of Bohemia in a long and fruitful correspondence.
The fruits of this interest were published in 1649 in a lengthy French treatise
entitled The Passions of the Soul. The same year, Descartes accepted after much
hesitation an invitation to go to Stockholm to give philosophical instruction
to Queen Christina of Sweden. He was required to provide tutorials at the royal
palace at five o’clock in the morning, and the strain of this break in his
habits (he had maintained the lifelong custom of lying in bed late into the
morning) led to his catching pneumonia. He died just short of his fifty-fourth
birthday. The Cartesian system. In a celebrated simile, Descartes described the
whole of philosophy as like a tree: the roots are metaphysics, the trunk
physics, and the branches are the various particular sciences, including
mechanics, medicine, and morals. The analogy captures at least three important
features of the Cartesian system. The first is its insistence on the essential
unity of knowledge, which contrasts strongly with the Aristotelian conception
of the sciences as a series of separate disciplines, each with its own methods
and standards of precision. The sciences, as Descartes put it in an early
notebook, are all “linked together” in a sequence that is in principle as
simple and straightforward as the series of numbers. The second point conveyed
by the tree simile is the utility of philosophy for ordinary living: the tree
is valued for its fruits, and these are gathered, Descartes points out, “not
from the roots or the trunk but from the ends of the branches” (the practical sciences). Descartes frequently
stresses that his principal motivation is not abstract theorizing for its own
sake: in place of the “speculative philosophy taught in the Schools,” we can
and should achieve knowledge that is “useful in life” and that will one day
make us “masters and possessors of nature.” Third, the likening of metaphysics
or “first philosophy” to the roots of the tree nicely captures the Cartesian
belief in what has come to be known as foundationalism: the view that knowledge must be constructed
from the bottom up, and that nothing can be taken as established until we have
gone back to first principles. Doubt and the foundations of belief. In
Descartes’s central work of metaphysics, the Meditations, he begins his
construction project by observing that many of the preconceived opinions he has
accepted since childhood have turned out to be unreliable; so it is necessary,
“once in a lifetime” to “demolish everything and start again, right from the
foundations.” Descartes proceeds, in other words, by applying what is sometimes
called his method of doubt, which is explained in the earlier Discourse on the
Method: “Since I now wished to devote myself solely to the search for truth, I
thought it necessary to . . . reject as if absolutely false everything in which
one could imagine the least doubt, in order to see if I was left believing
anything that was entirely indubitable.” In the Meditations we find this method
applied to produce a systematic critique of previous beliefs, as follows.
Anything based on the senses is potentially suspect, since “I have found by
experience that the senses sometimes deceive, and it is prudent never to trust
completely those who have deceived us even once.” Even such seemingly
straightforward judgments as “I am sitting here by the fire” may be false,
since there is no guarantee that my present experience is not a dream. The
dream argument (as it has come to be called) leaves intact the truths of
mathematics, since “whether I am awake or asleep two and three make five”; but
Descartes now proceeds to introduce an even more radical argument for doubt
based on the following dilemma. If there is an omnipotent God, he could
presumably cause me to go wrong every time I count two and three; if, on the
other hand, there is no God, then I owe my origins not to a powerful and
intelligent creator, but to some random series of imperfect causes, and in this
case there is even less reason to suppose that my basic intuitions about
mathematics are reliable. By the end of the First Meditation, Descartes finds
himself in a morass of wholesale doubt, which he dramatizes by introducing an
imaginary demon “of the utmost power and cunning” who is systematically
deceiving him in every possible way. Everything I believe in (“the sky, the earth and all external things”) might be illusions that the demon has devised
in order to trick me. Yet this very extremity of doubt, when pushed as far as
it will go, yields the first indubitable truth in the Cartesian quest for
knowledge: the existence of the thinking
subject. “Let the demon deceive me as much as he may, he can never bring it
about that I am nothing, so long as I think I am something. . . . I am, I
exist, is certain, as often as it is put forward by me or conceived in the
mind.” Elsewhere, Descartes expresses this cogito argument in the famous phrase
“Cogito ergo sum” (“I am thinking, therefore I exist”). Having established his
own existence, Descartes proceeds in the Third Meditation to make an inventory
of the ideas he finds within him, among which he identifies the idea of a
supremely perfect being. In a much-criticized causal argument he reasons that
the representational content or “objective reality” of this idea is so great
that it cannot have originated from inside his own imperfect mind, but must
have been planted in him by an actual perfect being: God. The importance of God in the Cartesian
system can scarcely be overstressed. Once the deity’s existence is established,
Descartes can proceed to reinstate his belief in the world around him: since
God is perfect, and hence would not systematically deceive, the strong
propensity he has given us to believe that many of our ideas come from external
objects must, in general, be sound; and hence the external world exists (Sixth Meditation). More important still, Descartes uses the deity to set up a reliable
method for the pursuit of truth. Human beings, since they are finite and
imperfect, often go wrong; in particular, the data supplied by the senses is
often, as Descartes puts it, “obscure and confused.” But each of us can
nonetheless avoid error, provided we remember to withhold judgment in such
doubtful cases and confine ourselves to the “clear and distinct” perceptions of
the pure intellect. A reliable intellect was God’s gift to man, and if we use
it with the greatest possible care, we can be sure of avoiding error (Fourth Meditation). In this central part of his philosophy, Descartes
follows in a long tradition going back to Augustine (with its ultimate roots in Plato) that in the first place is skeptical about the evidence of the senses as
against the more reliable abstract perceptions of the intellect, and in the
second place sees such intellectual knowledge as a kind of illumination derived
from a higher source than man’s own mind. Descartes frequently uses the ancient
metaphor of the “natural light” or “light of reason” to convey this notion that
the fundamental intuitions of the intellect are inherently reliable. The label
‘rationalist’, which is often applied to Descartes in this connection, can be
misleading, since he certainly does not rely on reason alone: in the
development of his scientific theories he allows a considerable role to
empirical observation in the testing of hypotheses and in the understanding of
the mechanisms of nature (his “vortex theory” of planetary revolutions is based on observations of the behavior of whirlpools). What is true, nonetheless, is
that the fundamental building blocks of Cartesian science are the innate ideas
(chiefly those of mathematics) whose reliability Descartes takes as guaranteed by
their having been implanted in the mind by God. But this in turn gives rise to
a major problem for the Cartesian system, which was first underlined by some of
Descartes’s contemporaries (notably Mersenne and Arnauld), and which has come to
be known as the Cartesian circle. If the reliability of the clear and distinct
perceptions of the intellect depends on our knowledge of God, then how can that
knowledge be established in the first place? If the answer is that we can prove
God’s existence from premises that we clearly and distinctly perceive, then
this seems circular; for how are we entitled, at this stage, to assume that our
clear and distinct perceptions are reliable? Descartes’s attempts to deal with
this problem are not entirely satisfactory, but his general answer seems to be
that there are some propositions that are so simple and transparent that, so
long as we focus on them, we can be sure of their truth even without a divine
guarantee. Cartesian science and dualism. The scientific system that Descartes
had worked on before he wrote the Meditations and that he elaborated in his
later work, the Principles of Philosophy, attempts wherever possible to reduce
natural phenomena to the quantitative descriptions of arithmetic and geometry:
“my consideration of matter in corporeal things,” he says in the Principles,
“involves absolutely nothing apart from divisions, shapes and motions.” This
connects with his metaphysical commitment to relying only on clear and distinct
ideas. In place of the elaborate apparatus of the Scholastics, with its
plethora of “substantial forms” and “real qualities,” Descartes proposes to
mathematicize science. The material world is simply an indefinite series of
variations in the shape, size, and motion of the single, simple, homogeneous
matter that he terms res extensa (“extended substance”). Under this category he
includes all physical and biological events, even complex animal behavior,
which he regards as simply the result of purely mechanical processes (for non-human animals as mechanical automata, see Discourse, Part V). But there is
one class of phenomena that cannot, on Descartes’s view, be handled in this
way, namely conscious experience. Thought, he frequently asserts, is completely
alien to, and incompatible with, extension: it occupies no space, is unextended
and indivisible. Hence Descartes puts forward a dualistic theory of substance:
in addition to the res extensa that makes up the material universe, there is
res cogitans, or thinking substance, which is entirely independent of matter.
And each conscious individual is a unique thinking substance: “This ‘I’, that is, the soul by which I am what I am,
is entirely distinct from the body, and would not fail to be what it is even if
the body did not exist.” Descartes’s arguments for the incorporeality of the
soul were challenged by his contemporaries and have been heavily criticized by
subsequent commentators. In the Discourse and the Second Meditation, he lays
great stress on his ability to form a conception of himself as an existing
subject, while at the same time doubting the existence of any physical thing;
but this, as the critics pointed out, seems inadequate to establish the
conclusion that he is a res cogitans, a
being whose whole essence consists simply in thought. I may be able to imagine
myself without a body, but this hardly proves that I could in reality exist
without one (see further the Synopsis to the Meditations). A further problem is
that our everyday experience testifies to the fact that we are not incorporeal
beings, but very much creatures of flesh and blood. “Nature teaches me by the
sensations of pain, hunger, thirst and so on,” Descartes admits in the Sixth
Meditation, “that I am not merely present in my body as a sailor is present in
a ship, but that I am very closely joined and, as it were, intermingled with
it.” Yet how can an incorporeal soul interact with the body in this way? In his
later writings, Descartes speaks of the “union of soul and body” as a
“primitive notion” (see letters to Elizabeth of May 21 and June 28, 1643); by
this he seems to have meant that, just as there are properties such as length
that belong to body alone, and properties such as understanding that belong to mind alone, so there are items
such as sensations that are irreducibly psychophysical, and that belong to me
insofar as I am an embodied consciousness. The explanation of such
psychophysical events was the task Descartes set himself in his last work, The
Passions of the Soul; here he developed his theory that the pineal gland in the
brain was the “seat of the soul,” where data from the senses were received via
the nervous system, and where bodily movements were initiated. But despite the
wealth of physiological detail Descartes provides, the central philosophical
problems associated with his dualistic account of humans as hybrid entities
made up of physical body and immaterial soul are, by common consent, not
properly sorted out. Influence. Despite the philosophical difficulties that
beset the Cartesian system, Descartes’s vision of a unified understanding of
reality has retained a powerful hold on scientists and philosophers ever since.
His insistence that the path to progress in science lay in the direction of
quantitative explanations has been substantially vindicated. His attempt to
construct a system of knowledge by starting from the subjective awareness of
the conscious self has been equally important, if only because so much of the
epistemology of our own time has been a reaction against the autocentric
perspective from which Descartes starts out. As for the Cartesian theory of the
mind, it is probably fair to say that the dualistic approach is now widely
regarded as raising more problems than it solves. But Descartes’s insistence
that the phenomena of conscious experience are recalcitrant to explanation in
purely physical terms remains deeply influential, and the cluster of profound
problems that he raised about the nature of the human mind and its relation to
the material world are still very far from being adequately resolved.
descriptivism, the thesis
that the meaning of any evaluative statement is purely descriptive or factual,
i.e., determined, apart from its syntactical features, entirely by its truth
conditions. Nondescriptivism (of which emotivism and prescriptivism are the main varieties) is the view that the meaning of full-blooded evaluative statements is
such that they necessarily express the speaker’s sentiments or commitments.
Nonnaturalism, naturalism, and supernaturalism are descriptivist views about
the nature of the properties to which the meaning rules refer. Descriptivism is
related to cognitivism and moral realism.
determinable, a general
characteristic or property analogous to a genus except that while a property
independent of a genus differentiates a species that falls under the genus, no
such independent property differentiates a determinate that falls under the
determinable. The color blue, e.g., is a determinate with respect to the
determinable color: there is no property F independent of color such that a
color is blue if and only if it is F. In contrast, there is a property, having
equal sides, such that a rectangle is a square if and only if it has this
property. Square is a properly differentiated species of the genus rectangle.
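The contrast just illustrated can be put schematically. The following is an editorial gloss in first-order notation, not Johnson's own, and the predicate names are ours:

% Editorial gloss, in our notation (not Johnson's).
% Genus/species: some differentia F, independent of the genus, does the work.
\exists F\,\bigl[F \text{ is independent of } \mathit{rectangle} \wedge
  \forall x\,(\mathit{square}(x) \leftrightarrow \mathit{rectangle}(x) \wedge F(x))\bigr]
% Determinable/determinate: no such independent differentia exists.
\neg\exists F\,\bigl[F \text{ is independent of } \mathit{color} \wedge
  \forall x\,(\mathit{blue}(x) \leftrightarrow \mathit{color}(x) \wedge F(x))\bigr]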
W. E. Johnson introduces the terms ‘determinate’ and ‘determinable’ in his
Logic, Part I, Chapter 11. His account of this distinction does not closely
resemble the current understanding sketched above. Johnson wants to explain the
differences between the superficially similar ‘Red is a color’ and ‘Plato is a
man’. He concludes that the latter really predicates something, humanity, of
Plato; while the former does not really predicate anything of red. Color is not
really a property or adjective, as Johnson puts it. The determinates red, blue,
and yellow are grouped together not because of a property they have in common
but because of the ways they differ from each other. Determinates under the
same determinable are related to each other and are thus comparable in ways in
which they are not related to determinates under other determinables.
Determinates belonging to different determinables, such as color and shape, are
incomparable. ‘More determinate’ is often used interchangeably with ‘more
specific’. Many philosophers, including Johnson, hold that the characters of
things are absolutely determinate or specific. Spelling out what this claim
means leads to another problem in analyzing the relation between determinate
and determinable. By what principle can we exclude red and round as a determinate of red, and red as a determinate of red or round?
determinism, the view
that every event or state of affairs is brought about by antecedent events or
states of affairs in accordance with universal causal laws that govern the
world. Thus, the state of the world at any instant determines a unique future,
and knowledge of all the positions of things and the prevailing natural
forces would permit an intelligence to predict the future state of the world
with absolute precision. This view was advanced by Laplace in the early
nineteenth century; he was inspired by Newton’s success at integrating our
physical knowledge of the world. Contemporary determinists do not believe that
Newtonian physics is the supreme theory. Some do not even believe that all
theories will someday be integrated into a unified theory. They do believe
that, for each event, no matter how precisely described, there is some theory
or system of laws such that the occurrence of that event under that description
is derivable from those laws together with information about the prior state of
the system. Some determinists formulate the doctrine somewhat differently: (a) every event has a sufficient cause; (b) at any given time, given the past, only one future is possible; (c) given knowledge of all antecedent conditions and all laws of nature, an agent could predict at any given time the precise subsequent history of the universe (formulation (b) is given a schematic gloss at the end of this entry). Thus, determinists deny the existence of chance, although they concede that our ignorance of the laws or of all relevant antecedent conditions makes certain events unexpected, so that they appear to happen “by chance.” The term ‘determinism’ is also used in a more general way as the name
for any metaphysical doctrine implying that there is only one possible history
of the world. The doctrine described above is really scientific or causal
determinism, for it grounds this implication on a general fact about the
natural order, namely, its governance by universal causal law. But there is
also theological determinism, which holds that God determines everything that
happens or that, since God has perfect knowledge about the universe, only the
course of events that he knows will happen can happen. And there is logical
determinism, which grounds the necessity of the historical order on the logical
truth that all propositions, including ones about the future, are either true
or false. Fatalism, the view that there are forces e.g., the stars or the fates
that determine all outcomes independently of human efforts or wishes, is
claimed by some to be a version of determinism. But others deny this on the
ground that determinists do not reject the efficacy of human effort or desire;
they simply believe that efforts and desires, which are sometimes effective,
are themselves determined by antecedent factors as in a causal chain of events.
Since determinism is a universal doctrine, it embraces human actions and choices.
But if actions and choices are determined, then some conclude that free will is
an illusion. For the action or choice is an inevitable product of antecedent
factors that rendered alternatives impossible, even if the agent had
deliberated about options. An omniscient agent could have predicted the action
or choice beforehand. This conflict generates the problem of free will and
determinism.
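One standard way to make formulation (b) precise, offered here as an editorial gloss rather than as part of the doctrine's classical statements, compares possible worlds that share laws and history: if two worlds have the same laws and exactly the same past up to a time t, they do not diverge thereafter.

% Editorial gloss on (b); the notation is ours. L(w) is the set of laws of
% world w; Past_t(w) and Future_t(w) are its history up to and after time t.
\forall w\,\forall w'\,\forall t\;
  \bigl[\bigl(L(w) = L(w') \wedge \mathrm{Past}_t(w) = \mathrm{Past}_t(w')\bigr)
    \rightarrow \mathrm{Future}_t(w) = \mathrm{Future}_t(w')\bigr]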
Dewey, John (1859–1952),
American philosopher, social critic, and theorist of education. During an era
when philosophy was becoming thoroughly professionalized, Dewey remained a public philosopher who had a profound international influence on politics and education. His career began inauspiciously in his student days at the University of Vermont and then as a high school teacher
before he went on to study philosophy at the newly formed Johns Hopkins University. There
he studied with Peirce, G. S. Hall, and G. S. Morris, and was profoundly
influenced by the version of Hegelian idealism propounded by Morris. After
receiving his doctorate in 1884, Dewey moved to the University of Michigan, where he rejoined Morris, who had
relocated there. At Michigan he had as a colleague the young social
psychologist G. H. Mead, and during this period Dewey himself concentrated his
writing in the general area of psychology. In 1894 he accepted an appointment
as chair of the Department of Philosophy, Psychology, and Education at the University of Chicago, bringing Mead with him. At
Chicago Dewey was instrumental in founding the famous laboratory school, and
some of his most important writings on education grew out of his work in that
experimental school. In 1904 he left Chicago for Columbia University, where he joined F.
J. E. Woodbridge, founder of The Journal of Philosophy. He retired from
Columbia in 1930 but remained active in both philosophy and public affairs
until his death in 1952. Over his long career he was a prolific speaker and
writer, as evidenced by a literary output of forty books and over seven hundred
articles. Philosophy. At the highest level of generality Dewey’s philosophical
orientation can be characterized as a kind of naturalistic empiricism, and the
two most fundamental notions in his philosophy can be gleaned from the title of
his most substantial book, Experience and Nature (1925). His concept of
experience had its origin in his Hegelian background, but Dewey divested it of
most of its speculative excesses. He clearly conceived of himself as an
empiricist but was careful to distinguish his notion of experience both from
that of the idealist tradition and from the empiricism of the classical British
variety. The idealists had so stressed the cognitive dimension of experience
that they overlooked the non-cognitive, whereas he saw the British variety as
inappropriately atomistic and subjectivist. In contrast to these Dewey fashioned
a notion of experience wherein action, enjoyment, and what he called
“undergoing” were integrated and equally fundamental. The felt immediacy of
experience (what he generally characterized as its aesthetic quality) was basic
and irreducible. He then situated cognitive experience against this broader
background as arising from and conditioned by this more basic experience.
Cognitive experience was the result of inquiry, which was viewed as a process
arising from a felt difficulty within our experience, proceeding through the
stage of conceptual elaboration of possible resolutions, to a final
reconstruction of the experience wherein the initial fragmented situation is
transformed into a unified whole. Cognitive inquiry is this mediating process
from experience to experience, and knowledge is what makes possible the final
more integrated experience, which Dewey termed a “consummation.” On this view
knowing is a kind of doing, and the criterion of knowledge is “warranted
assertability.” On the first point, Dewey felt that one of the cardinal errors
of philosophy from Plato to the modern period was what he called
“the spectator theory of knowledge.” Knowledge had been viewed as a kind of
passive recording of facts in the world and success was seen as a matter of the
correspondence of our beliefs to these antecedent facts. To the contrary, Dewey
viewed knowing as a constructive conceptual activity that anticipated and
guided our adjustment to future experiential interactions with our environment.
It was with this constructive and purposive view of thinking in mind that Dewey
dubbed his general philosophical orientation instrumentalism. Concepts are
instruments for dealing with our experienced world. The fundamental categories
of knowledge are to be functionally understood, and the classical dualisms of
philosophy (mind-body, means-end, fact-value) are ultimately to be overcome. The
purpose of knowing is to effect some alteration in the experiential situation,
and for this purpose some cognitive proposals are more effective than others.
This is the context in which “truth” is normally invoked, and in its stead
Dewey proposed “warranted assertability.” He eschewed the notion of truth (even in its less dangerous adjectival and adverbial forms, ‘true’ and ‘truly’)
because he saw it as too suggestive of a static and finalized correspondence
between two separate orders. Successful cognition was really a more dynamic
matter of a present resolution of a problematic situation resulting in a reconstructed
experience or consummation. “Warranted assertability” was the success
characterization, having the appropriately normative connotation without the
excess metaphysical baggage. Dewey’s notion of experience is intimately tied to
his notion of nature. He did not conceive of nature as
“the-world-as-it-would-be-independent-of-human-experience” but rather as a
developing system of natural transactions admitting of a tripartite distinction
between the physicochemical level, the psychophysical level, and the level of
human experience (with the understanding that this categorization was not to be construed as implying any sharp discontinuities). Experience itself, then, is
one of the levels of transaction in nature and is not reducible to the other
forms. The more austere, “scientific” representations of nature as, e.g., a
purely mechanical system, Dewey construed as merely useful conceptualizations
for specific cognitive purposes. This enabled him to distinguish his
“naturalism,” which he saw as a kind of nonreductive empiricism, from
“materialism,” which he saw as a kind of reductive rationalism. Dewey and
Santayana had an ongoing dialogue on precisely this point. Dewey’s view was
also naturalistic to the degree that it advocated the universal scope of scientific
method. Influenced in this regard by Peirce, he saw scientific method not as
restricted to a specific sphere but simply as the way we ought to think. The
structure of all reflective thought is future-oriented and involves a movement
from the recognition and articulation of a felt difficulty, through the
elaboration of hypotheses as possible resolutions of the difficulty, to the
stage of verification or falsification. The specific sciences (physics, biology, psychology) investigate the different levels of transactions in nature, but the
scientific manner of investigation is simply a generalized sophistication of
the structure of common sense and has no intrinsic restriction. Dewey construed
nature as an organic unity not marked by any radical discontinuities that would
require the introduction of non-natural categories or new methodological
strategies. The sharp dualisms of mind and body, the individual and the social,
the secular and the religious, and most importantly, fact and value, he viewed
as conceptual constructs that have far outlived their usefulness. The inherited
dualisms had to be overcome, particularly the one between fact and value
inasmuch as it functioned to block the use of reason as the guide for human
action. On his view people naturally have values as well as beliefs. Given
human nature, there are certain activities and states of affairs that we
naturally prize, enjoy, and value. The human problem is that these are not
always easy to come by nor are they always compatible. We are forced to deal
with the problem of what we really want and what we ought to pursue. Dewey
advocated the extension of scientific method to these domains. The deliberative
process culminating in a practical judgment is not unlike the deliberative
process culminating in factual belief. Both kinds of judgment can be
responsible or irresponsible, right or wrong. This deliberative sense of
evaluation as a process presupposes the more basic sense of evaluation
concerning those dimensions of human experience we prize and find fulfilling.
Here too there is a dimension of appropriateness, one grounded in the kind of
beings we are, where the ‘we’ includes our social history and development. On
this issue Dewey had a very Grecian view, albeit one transposed into a modern
evolutionary perspective. Fundamental questions of value and human fulfillment
ultimately bear on our conception of the human community, and this in turn leads
him to the issues of democracy and education. Society and education. The ideal
social order for Dewey is a structure that allows maximum self-development of
all individuals. It fosters the free exchange of ideas and decides on policies
in a manner that acknowledges each person’s capacity effectively to participate
in and contribute to the direction of social life. The respect accorded to the
dignity of each contributes to the common welfare of all. Dewey found the
closest approximation to this ideal in democracy, but he did not identify
contemporary democracies with this ideal. He was not content to employ old
forms of democracy to deal with new problems. Consistent with instrumentalism,
he maintained that we should be constantly rethinking and reworking our
democratic institutions in order to make them ever more responsive to changing
times. This constant rethinking placed a considerable premium on intelligence,
and this underscored the importance of education for democracy. Dewey is
probably best known for his views on education, but the centrality of his
theory of education to his overall philosophy is not always appreciated. The
fundamental aim of education for him is not to convey information but to
develop critical methods of thought. Education is future-oriented and the
future is uncertain; hence, it is paramount to develop those habits of mind
that enable us adequately to assess new situations and to formulate strategies
for dealing with the problematic dimensions of them. This is not to suggest
that we should turn our backs on the past, because what we as a people have
already learned provides our only guide for future activity. But the past is
not to be valued for its own sake but for its role in developing and guiding
those critical capacities that will enable us to deal with our ever-changing
world effectively and responsibly. With the advent of the analytic tradition as
the dominant style of philosophizing in America, Dewey’s thought fell out of
favor. About the only arenas in which it continued to flourish were schools of
education. However, with the recent revival of a general pragmatic orientation
in the persons of Quine, Putnam, and Rorty, among others, the spirit of Dewey’s
philosophy is frequently invoked. Holism, anti-foundationalism, contextualism,
functionalism, the blurring of the lines between science and philosophy and
between the theoretical and the practical (all central themes in Dewey’s philosophy) have become fashionable. Neo-pragmatism is a contemporary catchphrase.
Dewey is, however, more frequently invoked than read, and even the Dewey that
is invoked is a truncated version of the historical figure who constructed a
comprehensive philosophical vision.
dharma, in Hinduism and
especially in the early literature of the Vedas, a cosmic rule giving things
their nature or essence, or in the human context, a set of duties and rules to
be performed or followed to maintain social order, promote general well-being,
and be righteous. Pursuit of dharma was considered one of the four fundamental
pursuits of life, the three others being those of wealth (artha), pleasure (kama), and spiritual liberation (moksha). In the Bhagavad Gita, dharma was made famous
as svadharma, meaning one’s assigned duties based on one’s nature and abilities
rather than on birth. The Hindu lawgiver Manu (who probably lived between the third century B.C. and the first century A.D.) codified the dharmic duties based
on a fourfold order of society and provided concrete guidance to people in
discharging their social obligations based on their roles and stations in life.
Even though Manu, like the Gita, held that one’s duties and obligations should
fit one’s nature rather than be determined by birth, the dharma-oriented Hindu
society was eventually characterized by a rigid caste structure and a limited
role for women.
Dharmakirti (seventh century A.D.), Indian Yogacara Buddhist philosopher and logician. His works
include Pramanavarttika (“Explanation of the Touchstones”), a major work in logic
and epistemology; and Nyayabindu, an introduction to his views. In
Santanantara-siddhi (“Establishment of the Existence of Other Minds”) he defends
his perceptual idealism against the charge of solipsism, claiming that he may
as legitimately use the argument from analogy for the existence of others (drawing inferences from apparently intelligent behaviors to the intelligences that cause them) as can his perceptual realist opponents. He criticized Nyaya theistic
arguments. He exercised a strong influence on later Indian work in logic.
d’Holbach, Paul-Henri-Dietrich, Baron (1723–89), French philosopher, a
leading materialist and prolific contributor to the Encyclopedia. He was born in the Rhenish Palatinate, settled in France at an early age, and read
law at Leiden. After inheriting an uncle’s wealth and title, he became a solicitor
at the Paris “Parlement” and a regular host of philosophical dinners attended
by the Encyclopedists and visitors of renown (Gibbon, Hume, Smith, Sterne, Priestley, Beccaria, Franklin). Knowledgeable in chemistry and mineralogy and
fluent in several languages, he translated German scientific works and English
anti-Christian pamphlets into French. Basically, d’Holbach was a synthetic
thinker, powerful though not original, who systematized and radicalized
Diderot’s naturalism. Also drawing on Hobbes, Spinoza, Locke, Hume, Buffon,
Helvétius, and La Mettrie, his treatises were so irreligious and anticlerical
that they were published abroad anonymously or pseudonymously: Christianity Unveiled (1756), The Sacred Contagion (1768), Critical History of Jesus (1770), The Social System (1773), and Universal Morality (1776). His masterpiece, the System of Nature (1770), a “Lucretian” compendium of eighteenth-century materialism, even
shocked Voltaire. D’Holbach derived everything from matter and motion, and
upheld universal necessity. The self-sustaining laws of nature are normative.
Material reality is therefore contrasted to metaphysical delusion,
self-interest to alienation, and earthly happiness to otherworldly optimism.
More vindictive than Toland’s, d’Holbach’s unmitigated critique of Christianity
anticipated Feuerbach, Strauss, Marx, and Nietzsche. He discredited
supernatural revelation, theism, deism, and pantheism as mythological, censured
Christian virtues as unnatural, branded piety as fanatical, and stigmatized
clerical ignorance, immorality, and despotism. Assuming that science liberates
man from religious hegemony, he advocated sensory and experimental knowledge.
Believing that society and education form man, he unfolded a mechanistic
anthropology, a eudaimonistic morality, and a secular, utilitarian social and
political program.
diagonal procedure, a
method, originated by Cantor, for showing that there are infinite sets that
cannot be put in one-to-one correspondence with the set of natural numbers (i.e., enumerated). For example, the method can be used to show that the set of
real numbers x in the interval 0 < x ≤ 1 is not enumerable. Suppose x0, x1, x2, . . . were such an enumeration (x0 is the real correlated with 0; x1, the real correlated with 1; and so on). Then consider the list formed by replacing each
real in the enumeration with the unique non-terminating decimal fraction
representing it: the first decimal fraction represents x0; the second, x1; and so on. By diagonalization we select the diagonal digits, the digit xnn being the nth digit of the nth fraction, and change each of them, taking care to avoid a terminating decimal. The resulting fraction is not on our list, for it differs from the first in the tenths place, from the second in the hundredths place, from the third in the thousandths place, and so on. Thus the real it represents is not in the supposed enumeration, which contradicts the original assumption. The idea can be put more elegantly. Let f be any function such that, for each natural number n, f(n) is a set of natural numbers. Then there is a set S of natural numbers such that n ∈ S if and only if n ∉ f(n). It is obvious that, for each n, f(n) ≠ S.