Thursday, April 9, 2020
Grice's Anti-Ryleism
objection is appropriately labeled Ryleism. Here, it is maintained that to say machines think is a category mistake, a violation of ordinary usage. Let us take Ryleism first.
Can machines think or feel? Would we ever be inclined to say of a machine that it thought or felt? Is it logically possible for a machine to think? This last question draws attention to the critical issue. If I say, "Machines can't think," and I mean by this that nothing could count as evidence that a machine thought, I have, by a semantic decision, rendered it impossible for the expression "the machine thought . . ." to function in our language. It is a violation of protocol to use "machine" and "thought" in this way. It is a violation of a formation rule -- which can now be called a rule since it has been made explicit.
The expression, "The machine thought that it was in love but didn't feel that way," is, after our rule has been explicitly posited, not a well-formed formula, and this is why we can say that it is logically impossible for machines to think.

[Footnote: I do not mean to imply that Gödel himself ever gave any thought to what is discussed here; only that what his proof seems to imply has been used by others to formulate a position. See J. R. Lucas, "Minds, Machines and Gödel," Minds and Machines, ed. Alan Ross Anderson (Englewood Cliffs: Prentice-Hall, 1964), pp. 43-59.]
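The kind of formation rule just described can be given a toy formulation (the notation is my own illustration, not the author's). Suppose we lay down:

$$\text{``}t\ \text{thinks''ĘisĘwell-formed} \;\Longleftrightarrow\; t \in \mathrm{PersonTerms}, \qquad \text{and by decision } \mathrm{machine} \notin \mathrm{PersonTerms}.$$

Under such a rule, "The machine thought . . ." is excluded by fiat rather than by any discovery about machines, which is just why the resulting impossibility is logical rather than empirical.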
But what if no such rule has been set forth? Doesn't the use of "machine" in the expression, "The machine thought," seem strange anyway? We might say here that this expression violates customary usage, but how do we determine what customary usage is? "It seems to me that this expression violates usage." Doesn't this statement seem a little arbitrary and self-centered?
I could also make the statement, "Russians do not think," into a "necessarily" true proposition. I simply decide to use some other expression, or expressions, besides "think" when I talk about Russians. Where someone else might say, "Tolstoy thought that he had won the game," I might say, "Tolstoy exhibited a positive anticipatory response set at one point in the game," or some other such expression. Does it not seem strange to use two different sets of expressions with respect to Russians and other people? But -- hold on -- who said Russians are people? If I were to adopt the procedure of not using "thought" with respect to Russians, I might feel some inclination to alter the usage of other terms when I talk about Russians.
When you say, "Machines can't think," exactly what are you telling me? Are you saying something about machines or something about the way you use language? If it turns out that nothing can count as evidence against the statement, then you have simply told me something about your language habits. Sometimes it takes a little while to find out if someone is "telling" or "showing" -- telling us something about the world or showing us something about himself.
If someone says, for example, that "widespread, unprovoked violence is not exhibited in any society," we might, at first, think he is saying something about society. But upon reflection, we discover that we, ourselves, would not call any group of people a society if there were widespread, unprovoked violence exhibited. We find that our speaker has not only told us how he uses words, but that we agree with him in that usage. There is, then, basic agreement at the pragmatic level, where usage is established informally. We can also agree to prescribe a given usage; this is agreement at the semantic level. Where we share a usage, we take statements made in it as saying something about machines; where we do not agree with the usage, we say it tells us something about the speaker.

In point of fact, however, in this particular case, we have some feelings of puzzlement. For a long time, we have used the word "think" with respect to certain forms of complex behavior exhibited by people. It just happens that no other things around us exhibit these complex behaviors. We might say that a dog "thought," but even here we seem puzzled. Is asking, "Can a dog think?" a question about dogs or about linguistic usage? At any rate, we have usually reserved the term "thought" for people.
Until the present time, there has been little trouble with the use of the word "thought," since no organism other than man exhibited the complex behavior that we refer to when we say so-and-so thinks. But now machines have begun to exhibit complex behavior which makes some people want to say they think. The trouble is that "our linguistic habits are out of joint." A distinction is called for where one was heretofore not needed. In James' example of the men who argued over whether or not the man goes round the squirrel, we have something similar. Once James shows that we normally do not have to distinguish between circumnavigating an object and being at different times oriented toward its different sides, we see that here a distinction is called for. Two different usages which have in the past accompanied each other have to be separated.
[Footnote: William James, "What Pragmatism Means," Essays in Pragmatism, ed. Alburey Castell (New York and London: Hafner Publishing Co., 1966), pp. 141-58.]

I have so far pointed out that: (1) We can, by a semantic decision, make it logically impossible for a machine to think. (2) I have also indicated that we would not feel too uncomfortable with this decision because our actual linguistic habits have not been offended -- we have not in the past used "think" and "machine" together. (3) I have further indicated that it would seem odd to us if someone said that "Russians don't think" and have indicated that this is due to established language habits. (4) I have said that in the case of dogs, as with recent machines, we waver. (5) Finally, I have indicated that we do waver in our usage because the kinds of behavior exhibited by dogs, and some complex machines, overlap the behavior range of man -- at least, to some extent. In connection with this, I have said that James' squirrel problem can help us to see that the overlapping of spheres of usage is the cause of our trouble. Now what if the range of a machine's behavior exactly overlapped man's?
Suppose we could construct a mechanical man whose skin, eyes, etc., were indistinguishable from a man's. Suppose further that he had all the facial expressions right -- in short, that we could not in any way distinguish this machine from a man either by behavior or appearance. I have no doubt that we would say the "man(?)" thought, and even if we knew that it was mechanical, in time we would either say, "Of course machines think," or possibly "Some machines are men," which is another possible linguistic adjustment we could make. Or suppose that we were to remove most of a man's cortex and install radio control equipment wired directly to the sensory and motor nerves of the brain. If this equipment were in contact with a big enough computer, say one the size of a medium-sized house, we could probably get people to say that the individual so controlled thought, but they would probably abandon this usage if the situation were explained.
It seems that what we do say about "thinking" or, indeed, most any word, depends upon customary usage, and that what we might say depends on what definitions, what semantic decisions, we make. One wants to say that at least the law of contradiction stands firm -- of course it does: it is a formation rule of logic. "One cannot 'logically' hold contradictory opinions." Of course not -- this is precisely the point. Contrast the law of contradiction with the "necessarily" true hypothetical statement, "If A is the cause of B, then A precedes B." This statement "feels" just as true to me as the law of contradiction.
We can definitively undermine Rylean objections to the expression "machines think." First, we note that while conformity to ordinary usage is necessary for communication, there is also embodied in ordinary linguistic usage a procedure for changing some one or more features of usage. If I say, "Let us henceforth use the term 'think' in conjunction with 'machine,'" then, if people agree to this usage, there is no possibility of a Rylean objection, for the expression now conforms to ordinary usage. And there is no need to justify, in general, the process of altering customary usage; it is simply a fact that this procedure is used, and furthermore, customarily used.
Alternatively, what if I say, "Let us think of men as machines," and other people agree to this usage? Now the Ryle people will have a difficult decision to make. Either "think" will no longer be legitimately used with "man" and, for all practical purposes, have no use whatever, or "machines think" will become a legitimate expression.
As I have already said, arguments against the legitimacy of the statement "the machine thinks" which are associated with Gödel's proof are arguments designed to show that a machine cannot think what is not programmed into it, or that human thought is essentially different from machine thought. It is my belief that the subsequently presented argument is one big category mistake. The argument runs like this:
I. Any deductive system which is both consistent and rich enough to produce arithmetic contains at least one formula not provable in the system. Nevertheless, we can see that these formulae are true.
II. It is maintained that any machine which is complex enough to be suspected of thought is an instantiation of a formal deductive system, obviously containing simple arithmetic since it can do simple arithmetic. It follows that since we can see that some formulae, unprovable in such a system, are obviously true, and since the machine is no more comprehensive than its deductive system, human thought is essentially different from anything a machine might do.
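For reference, the theorem appealed to in step I can be stated compactly (a sketch in modern notation, mine rather than the author's): if $F$ is a consistent, effectively axiomatized theory containing elementary arithmetic, then one can construct a sentence $G_F$ which says, in effect, that it is not provable in $F$, and for which

$$F \nvdash G_F.$$

Since $G_F$ asserts precisely its own unprovability in $F$, and it is indeed unprovable there, $G_F$ is true -- this is the formula we "can see" to be true although the system cannot prove it.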
In the first place, I will point out that even if we accept this characterization of a machine, this "essential" difference is something very trivial. What we can contemplate, in all its splendor (and which the poor machine never can), is the formula which says, "This formula is unprovable in this system."
What is of much more importance, however, is the fact that this conception of a machine is completely wrong. The conception of the machine held by the Gödelians is that it is completely determined in its operations by a set of rules, and that what it does will always be something definite and discrete, because anything indefinite would not be machine-like. Well, is this not the very model of a machine which I have been holding up for you all along? How, you may well ask, can I escape the conclusions so rigorously deduced from our conception of a machine and Gödel's theorem?
It is fortunate, indeed, that I am defending cybernetics as a metaphysical principle; only this makes a defense possible. Suppose you are playing chess and you, with Black, are in the following position:
[Chess diagram: Black's pieces are circled. White has a pawn one square from promotion on the king file, and neither the white king nor the white rook has moved.]
Suppose your opponent offers to bet you five hundred dollars that he can checkmate you in two moves, and you, after checking the position, accept. Later, as you are writing out your check for five hundred dollars, you might begin to have some valuable thoughts. After all, it is hoped that you will get something for your money.

[Footnote: White plays pawn to king eight, promoting to a rook; Black moves anything; White then castles, using the rook on king eight -- checkmate.]

Or, suppose that someone offers to bet you two thousand dollars against one thousand that he can beat a "fair" roulette wheel. You agree that he is to play the wheel for one hour, and at the end of that time if he is ahead you pay; otherwise, he pays. Are you smart if you accept this bet? A little knowledge is a dangerous thing.
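The temptation in the roulette wager lies in a bit of arithmetic worth writing out. This is a sketch under assumptions of my own: read the stakes as his two thousand dollars against your one thousand, suppose the wheel really is a fair American wheel, and let $p$ be the chance that the bettor finishes the hour ahead making ordinary even-money bets (a single such bet gives $p = 18/38 \approx 0.47$; a long session gives something smaller). Your expected gain is then

$$E = (1-p)\cdot \$2{,}000 \;-\; p\cdot \$1{,}000,$$

which is positive whenever $p < 2/3$ -- roughly $+\$580$ at $p \approx 0.47$. So the "little knowledge" says accept. The trouble is that this calculation is only as good as the model behind it: it assigns no weight at all to the possibility that the wheel, or the bettor, is not what he appears to be -- the "variable unknown" of the next paragraph.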
Experiences of this type can lead one, after a time, to construct models of a different type. You include in your model of the situation the possibility that there is something you have not seen. You learn caution -- like the Greeks with their possible "God unknown," you consider the possible "state" or "variable unknown."
While I abstract models from situations, the point of the above discussion is that the model may be incomplete. I have already explained elsewhere how our cybernetics model of man makes this possible. Even if I built a machine constructed to operate according to certain rules, still I include in my model the possibility of the unexpected. Suppose the machine surprises me -- well, then I will construct another model of it, a different abstraction.
We act the same when dealing with a man or a mechanism. We construct a model, but our model, if we have much experience with life, also has room for the unexpected; we realize that we may have to modify our model. My world is in every case my world -- it is as I see it. This is all very true, but my world is a very contingent world. Although I am certain of my own ideas, as my ideas, I am also certain that the possibility exists that I will not have the same ideas tomorrow; this is an undeniable fact. But then, isn't Husserl right after all? Maybe we do intuit essences which are there for everyone. How can we entertain the notion that someone else abstracts a different model from a given situation? Can we imagine his seeing the situation in some way in which we do not see it? We must say "yes" and "no" here.
Any definite model which we can attribute to someone else must, of course, be one we possess; but we learn, if we live long enough, to expect "our world" to change. It is an undeniable fact; Descartes would say that "'I think, hence I am' is true every time I pronounce it." Yet, in a way, I can entertain the possibility that from some completely alien frame of reference even this pronouncement may be nonsense.
The Gödelians are held captive by an idealized picture of a computer working exactly as the designer planned for it to work. They would see things more clearly if they began by considering a servo-mechanism instead of a computer.
Suppose that X has devised a way of reading Y's thoughts and that he is able to derive a set of rules which perfectly predict Y's behavior. Has X thereby robbed Y of that something special; has X stolen Y's mind? Does it follow that X can now rely on the set of rules which he has formulated about Y and feel perfectly confident that Y will never surprise him? If he has lived his entire life in the academic world, he might think so; but anyone with any practical intelligence will know that X must always keep open the possibility that he may be surprised. This is not only true of humans but of machines as well. But, when we are surprised, what do we do? We build another model; we are made that way -- it's in our hardware.
What now about the problem of consciousness? I have already shown that our cybernetics image of man reveals consciousness as it refers to someone else -- to his behavioral states. Here, we can use consciousness to label the behavior of a machine. Someone might ask me at this point, "Keaton, don't you feel a little strange saying, 'a machine is conscious'?" Suppose I reply, "Yes, I do feel strange." But is this feeling anything more than something occasioned by my linguistic habits? Suppose a child grew up in a world where machines behaved in many ways like men. Would he not say, "Of course machines are conscious," and think nothing of it?
If these responses do not satisfy you, let us try another way. Suppose I were to ask you, "How do you know other people are conscious -- that they have that feeling of consciousness?" Can you give me satisfactory criteria for determining if another person is conscious and feels?
Let me suggest the following: Imagine that I can predict every action of someone else -- that I can calculate his reactions in advance. Also, I have completely elaborated a model of him and have programmed a computer so that it is isomorphic with the system I consider him to be. Now, suppose that I can take direct read-offs -- that I am literally wired up to both the individual and the computer, and in each case, whether I am attending to the man or the computer, I feel or experience sensations which are in perfect agreement with the report of the "person(?)" under observation. In what way can it now make sense to say I do not experience what he experiences? If you were to deny these criteria as adequate, then how can "experience what he experiences" have any meaning? I feel sure that most people would accept my criteria, and if they do, how could they avoid accepting the same criteria if a machine were the subject of an investigation? The point here is that any criteria which would be capable of establishing that another man thought or could feel would also be capable of establishing that a machine could feel. And once we have criteria, the whole thing becomes, in principle, an empirical matter.
Of course, even if we someday have the technology to test both men and machines by these criteria, and if it is discovered that men can pass and no machine can, this will not obviate the fact that men of our times do, in fact, act as though other men are, with the qualifications I have made earlier, determinate machines.
The ethical objection derives its force from one of the most fundamental concepts of cybernetics itself -- the concept of feedback. Briefly, the objection is that the widespread acceptance of the general systems image of man will disrupt that system which we call society. Is social life possible when man is viewed as a determinate machine? An answer to this question can best be attempted after we have considered just what the nature of society is when it is considered as a system.