Grice knew how to lecture.
When invited to deliver the Kant lectures across the bay, at Stanford, he chose the right topic: "Aspects of reason and reasoning."
When invited to deliver the Locke lectures across the pond, at Oxford, he chose the right topic -- for surely Locke predates Kant and understands English idioms like 'to reason' better: "Aspects of reason and reasoning".
Reasoning is related to problem-solving, because people trying to solve a reasoning task have a definite goal and the solution is not obvious.
However, problem-solving and reasoning are typically treated separately.
Reasoning problems differ from other kinds of problems in that they often owe their origins to systems of formal logic, as symbolised by Frege, metaphorically laughed at by Grice in "Logic and Conversation," but taken slightly more seriously by his best friend (according to Myro), George Myro, in his "Rudiments of Logic" ("dedicated to Paul").
There are clear overlaps between the two areas, which may differ less than one might initially suppose.
Inductive reasoning involves making a generalised conclusion from premises referring to particular instances.
Hypotheses can never be shown to be logically true by simply generalising from confirming instances (i.e., induction).
Generalisations provide no certainty for future events.
Deductive reasoning allows us to draw conclusions that are definitely valid provided that other statements are assumed to be true.
For example, if we assume
i. Grice is taller than Strawson.
ii. Strawson is taller than Warnock.
the conclusion
iii. Grice is taller than Warnock
is necessarily true -- though not for Quine, for whom nothing is necessarily true; he is trading on Gershwin, "It ain't necessarily so". (Strawson and Warnock, like Grice, are Oxonian philosophers of ordinary language, and should, by rule, be more or less of the same height.)
It is well known that Popper argues for a distinction between confirmation and falsification.
Confirmation involves obtaining evidence to confirm the correctness of one’s hypothesis.
Falsification involves attempting to falsify hypotheses by experimental tests.
Popper argues that it is impossible to achieve confirmation via hypothesis testing.
Rather, scientists should focus on falsification.
When Johnson-Laird and Wason devised their tests, such as "the 2–4–6 task", in which participants have to discover a relational rule underlying a set of three numbers, they found performance was poor on the task because people tended to show confirmation bias – they generated numbers conforming to their original hypothesis rather than trying hypothesis disconfirmation.
A positive test is one in which the numbers produced are an instance of your hypothesis.
A negative test is one in which the numbers produced do not conform to your hypothesis.
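To make the distinction concrete, here is a minimal Python sketch (an illustration, not a reconstruction of the original experiment): the hidden rule is taken to be "any ascending triple", while the participant's hypothesis is the narrower "ascending in steps of two".

```python
import random

def hidden_rule(triple):
    # The experimenter's actual rule in the classic study:
    # any strictly ascending triple.
    a, b, c = triple
    return a < b < c

def hypothesis(triple):
    # A typical participant hypothesis: numbers ascending in steps of two.
    a, b, c = triple
    return b - a == 2 and c - b == 2

random.seed(0)
# Positive tests: triples that conform to the hypothesis.
positive = [(n, n + 2, n + 4) for n in random.sample(range(1, 50), 5)]
# Negative tests: triples that violate the hypothesis.
negative = [(3, 7, 20), (1, 2, 3), (5, 4, 3), (10, 10, 10)]

assert all(hypothesis(t) for t in positive)
assert not any(hypothesis(t) for t in negative)

for label, triples in (("positive", positive), ("negative", negative)):
    for t in triples:
        print(label, t, "-> experimenter says:", hidden_rule(t))

# Every positive test earns a "yes", so the too-narrow hypothesis is
# never falsified; only a negative test such as (3, 7, 20) -- which fits
# the rule but not the hypothesis -- can expose the difference.
```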
Wason’s and Johnson-Laird's theoretical position predicts that people should perform better
when instructed to engage in disconfirmatory testing.
The evidence, however, and just to delight Grice, is mixed -- and "on the rocks".
Cowley and Byrne argue that people show confirmation bias because they are loath to abandon their own initial hypothesis.
Tweney finds that performance on the 2–4–6 task was enhanced when participants were told to discover two rules, one the complement of the other.
Gale and Ball argue that it was important for participants to identify the crucial dimensions of ascending vs. descending numbers.
Performance on Johnson-Laird and Wason's 2–4–6 task involves separable processes of hypothesis generation and hypothesis testing.
Cherubini (not the author of "Medea") argues that participants try to preserve as much of the information contained in the example triple (i.e., 2–4–6) as possible in their initial hypothesis.
As a result, this hypothesis is typically much more specific than the correct rule.
Most hypotheses are sparse or narrow in that they apply to less than half the possible entities in any given domain (vide Navarro and Perfors).
The 2–4–6 problem is a valuable source of information about inductive reasoning.
The findings from the 2–4–6 task may not be generalisable because, in the real world, positive testing is not penalised.
Additional factors come into play in the real world.
Consider next hypothesis testing in simulated and real-world settings.
There is a popular view that "scientific discovery is the result of genius, inspiration, and sudden insight" (Trickett and Trafton).
That view is largely incorrect.
Scientists (whom Grice never revered -- 'we philosophers are into hypostasis; mere scientists can only grasp hypothesis') typically use what Klahr and Simon describe as weak methods.
Kulkarni and Simon found scientists make extensive use of the unusualness heuristic, or rule of thumb.
This involves focusing on unusual or unexpected findings and then using them to guide future theorising and research.
Trickett and Trafton argue that scientists make much use of “what if” reasoning in which they work out what would happen in various imaginary circumstances.
Dunbar uses a simulated research environment.
Dunbar finds that participants who simply tried to find data consistent with their hypothesis failed to
solve the problem.
It is believed that scientists should focus on falsifying their hypotheses.
However, this does not tend to happen.
Nearly all reasoning in everyday life is inductive rather than deductive.
Hypothesis testing is a form of inductive reasoning.
It is well known that Popper argued that it is impossible to confirm a hypothesis via hypothesis testing.
Rather, scientists should focus on falsification.
However, it is now accepted that Popper’s views are over-simplified and confirmation is often appropriate in real scientific research.
When Johnson-Laird and Wason devised tasks such as the 2–4–6 task, they found people tended to show confirmation bias, producing sequences that confirmed their hypotheses rather than seeking negative evidence.
Later studies demonstrate that people’s behaviour is often more accurately described as confirmatory or positive testing.
"What if", or conditional reasoning is basically reasoning with “if”.
It has been studied to decide if human reasoning is logical.
In propositional logic, meanings are different from those in natural language.
There are different types of logical reasoning statements (each verified by brute force in the sketch after this list):
"Affirmation of the consequent":
Premises (if P then Q), (Q); Conclusion (P).
Invalid form of argument.
"Denial of the antecedent":
Premises (if P then Q), (not P); Conclusion (not Q).
Invalid form of argument.
"Modus tollens":
Premises (if P then Q), (not Q); Conclusion (not P).
Valid form of argument.
"Modus ponens":
Premises (if P then Q), (P); Conclusion (Q).
Valid form of argument.
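As flagged above, the validity of these four forms can be checked mechanically. A minimal brute-force sketch in Python (illustrative, not anyone's published code), treating "if" as the material conditional of propositional logic:

```python
from itertools import product

def implies(a, b):
    # Material conditional of propositional logic: "if a then b".
    return (not a) or b

def valid(premises, conclusion):
    """An argument form is valid iff no assignment of truth values
    makes all premises true and the conclusion false."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

forms = {
    "affirmation of the consequent":
        ([lambda p, q: implies(p, q), lambda p, q: q], lambda p, q: p),
    "denial of the antecedent":
        ([lambda p, q: implies(p, q), lambda p, q: not p], lambda p, q: not q),
    "modus tollens":
        ([lambda p, q: implies(p, q), lambda p, q: not q], lambda p, q: not p),
    "modus ponens":
        ([lambda p, q: implies(p, q), lambda p, q: p], lambda p, q: q),
}

for name, (premises, conclusion) in forms.items():
    verdict = "valid" if valid(premises, conclusion) else "invalid"
    print(f"{name}: {verdict}")
```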
Invalid inferences (affirmation of the consequent, denial of the antecedent) are accepted much of the time, the former typically more often (Evans).
Neys finds evidence that conditional reasoning is strongly influenced by the availability of knowledge in the form of counterexamples appearing to invalidate a given conclusion.
Neys also finds performance on conditional reasoning tasks depends on individual differences.
Bonnefon argues that reasoners draw inferences when presented with conditional reasoning problems.
According to Markovits, there are two strategies people can use with conditional reasoning problems: a statistical strategy and a counterexample strategy.
Various findings suggest many people fail to think logically on conditional reasoning tasks.
Conditional reasoning is closer to decision making than to classical logic (Bonnefon).
The Johnson-Laird/Wason selection task has four cards, each with a number on one side and a letter on the other.
Participants are told a rule and asked to select only those cards that must be turned over to decide if the rule is correct.
Only 5–10% of participants give the correct answer.
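The logic of the task can be made explicit with a small sketch. Here is an illustrative Python version (the rule and card faces are the standard textbook example -- "if a card has a vowel on one side, it has an even number on the other", with faces A, K, 4 and 7 -- not necessarily Wason's exact materials): a card must be turned over iff some possible hidden face could falsify the rule.

```python
VOWELS = set("AEIOU")
LETTERS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

def falsifies(letter, number):
    # The conditional rule fails only for a vowel paired with an odd number.
    return letter in VOWELS and number % 2 == 1

def must_turn(visible):
    """A card must be turned over iff some possible hidden face could
    falsify the rule."""
    if isinstance(visible, str):  # letter showing; the hidden face is a number
        return any(falsifies(visible, n) for n in range(10))
    return any(falsifies(l, visible) for l in LETTERS)  # number showing

for card in ["A", "K", 4, 7]:
    print(card, "->", "turn over" if must_turn(card) else "leave")
# A and 7 must be turned (they alone could conceal a vowel/odd pairing);
# the commonly chosen 4 is irrelevant, which is the classic error.
```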
Many attempts have been made to account for performance on this task.
Evans identifies matching bias as an important factor.
This is the tendency for participants to select cards matching items named in the rule regardless of whether the matched items are correct.
Stenning and van Lambalgen argue that people have difficulties interpreting precisely what the selection problem is all about.
Oaksford argues that the logical answer to the Johnson-Laird/Wason selection task conflicts with what typically makes most sense in everyday life.
Performance in the Johnson-Laird/Wason selection task can be improved by making the underlying structure of the problem more explicit (Girotto) or by motivating participants to disprove the rule (Dawson).
A syllogism consists of two premises or statements followed by a conclusion.
The validity of the conclusion depends only on whether it follows logically from the premises.
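To illustrate that validity "depends only on whether it follows logically", here is a minimal brute-force check in Python (an illustrative sketch, not a published method). For syllogistic, i.e. monadic, logic, an interpretation is characterised by which of the eight A/B/C membership patterns are occupied, so checking all 2^8 = 256 occupancy patterns is exhaustive.

```python
from itertools import product

# Each individual falls into one of 8 "cells" according to whether it
# is A, B and/or C; validity depends only on which cells are occupied.
CELLS = list(product([False, True], repeat=3))  # (is_A, is_B, is_C)

def all_are(X, Y, occupied):
    # "All X are Y" holds iff every occupied cell that is X is also Y
    # (vacuously true if no occupied cell is X).
    return all(Y(c) for c in occupied if X(c))

A, B, C = (lambda c: c[0]), (lambda c: c[1]), (lambda c: c[2])

def barbara_is_valid():
    for pattern in product([False, True], repeat=len(CELLS)):
        occupied = [cell for cell, on in zip(CELLS, pattern) if on]
        premises = all_are(A, B, occupied) and all_are(B, C, occupied)
        if premises and not all_are(A, C, occupied):
            return False  # a counterexample interpretation exists
    return True

print(barbara_is_valid())  # True: "All A are B; all B are C; so all A are C"
```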
Belief bias is the tendency to accept believable conclusions and reject unbelievable conclusions, irrespective of their logical validity or invalidity.
Klauer finds various biases in syllogistic reasoning, including a base-rate effect, in which performance is influenced by the perceived probability of syllogisms being valid.
Stupple and Ball find with syllogistic reasoning that people took longer to process unbelievable
premises than believable ones.
Stupple finds participants were more likely to accept conclusions that matched the premises in surface features than those not matching.
Conditional reasoning has its origins in a system of logic known as propositional logic.
Performance on conditional reasoning problems is typically better for the "modus ponens" inference than for other inferences (e.g., "modus tollens").
Conditional reasoning is influenced by context effects (e.g., the inclusion of additional premises).
Performance on the Wason and Johnson-Laird selection task is generally very poor, but is markedly better when the rule is deontic (or "practical", as Grice prefers) rather than indicative (or "alethic", as Grice prefers -- 'indicative', he said, "is not a mood; it's a mode").
Performance on syllogistic reasoning tasks is affected by various biases, including belief bias and the base-rate effect.
The fact that performance on deductive reasoning tasks is prone to error and bias suggests people often fail to reason logically.
The mental models approach, proposed by Johnson-Laird, is one of the most influential accounts of reasoning.
A mental model represents a possibility, capturing what is common to the different ways in which the possibility could occur.
People use the information contained in the premises to construct a mental model.
Here are the main assumptions of mental model theory.
A mental model describing the given situation is constructed and the conclusions that follow
are generated.
The model is iconic (its structure corresponds to what it represents).
An attempt is made to construct alternative models to falsify the conclusion by finding counterexamples to the conclusion.
If a counterexample model is not found, the conclusion is assumed to be valid.
The construction of mental models involves the limited resources of working memory.
Reasoning problems requiring the construction of several mental models are harder to solve than those requiring only one mental model because of increased demands on working memory.
The principle of truth states that individuals minimise the load on working memory by tending to construct mental models that represent explicitly only what is true, and not what is false (Johnson-Laird).
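A minimal sketch of what the principle of truth amounts to, using an exclusive disjunction as the premise (the ace/king content is illustrative, not a specific study item):

```python
from itertools import product

# Premise: an exclusive disjunction, "there is an ace or there is a king,
# but not both".
def premise(ace, king):
    return ace != king

# Fully explicit models: every possibility consistent with the premise,
# with the FALSE components represented as well.
explicit = [(a, k) for a, k in product([True, False], repeat=2)
            if premise(a, k)]
print("fully explicit models:", explicit)
# -> [(True, False), (False, True)]

# Models under the principle of truth: each model lists only what is
# TRUE in that possibility; what is false goes unrepresented.
truncated = [tuple(n for n, v in zip(("ace", "king"), m) if v)
             for m in explicit]
print("principle-of-truth models:", truncated)
# -> [('ace',), ('king',)]

# The truncated model ('ace',) tacitly means "ace and NOT king"; on the
# theory, neglecting that false component is what saves working memory
# and what produces illusory inferences on certain problems.
```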
Successful thinking results from the use of appropriate mental models.
Unsuccessful thinking occurs when we use inappropriate mental models.
Knauff finds deductive reasoning is slower when it involves visual imagery (cf. Grice, "The Causal Theory of Perception" -- That pillar box seems red to me +> It ain't).
Copeland and Radvansky test the working-memory assumption.
Copeland and Radvansky find a moderate correlation between working memory capacity and syllogistic reasoning.
Copeland and Radvansky also find that problems requiring more mental models had longer response times.
Legrenzi (not the author of "Eteocle") tests the principle of truth.
Legrenzi finds performance is high on problems where adherence to the principle of truth suffices.
In contrast, illusory inferences arise when the principle of truth does not permit correct inferences to be drawn.
People are less susceptible to such inferences if explicitly instructed to falsify the premises of reasoning problems (Newsome and Johnson-Laird).
Newstead, Eysenck, and Keane also find participants consistently fail to produce more mental models for multiple-model syllogisms than for single-model ones.
Most predictions of mental model theory have been confirmed experimentally.
In particular, evidence shows that people make errors by using the principle of truth and ignoring what is false.
But Grice made his name by sticking with "suggestio veri" and "suggestio falsi", i.e. implicature.
One limitation of the theory is that it assumes people engage in deductive reasoning to a greater extent than is actually the case.
Another is that the processes involved in forming mental models are under-specified.
According to dual-system theories, there are two systems involved in human reasoning.
One system involves unconscious processes and parallel processing, and is independent of intelligence.
The other system involves conscious processes and rule-based serial processing, has limited capacity and is linked to intelligence.
Evans proposes the heuristic–analytic theory of reasoning, which distinguishes between heuristic processes (System 1) and analytic processes (System 2).
Initially, heuristic processes use task features and knowledge to construct a single mental model.
Later, effortful analytic processes may intervene to revise this model.
This is more likely when task instructions tell participants to use abstract or logical reasoning;
participants are highly intelligent; sufficient time is available for effortful
analytic processing; or participants need to justify their reasoning.
Human reasoning is based on the use of three principles: the Singularity principle, the Relevance principle (not to be confused with Grice's conversational category of Relatio, after Kant), and the Satisficing principle.
In contrast to the mental model theory, the heuristic–analytic theory predicts that people initially use their world knowledge and immediate context in reasoning.
Deductive reasoning is regarded as less important.
Belief bias is a useful phenomenon for distinguishing between heuristic and analytic processes.
Evans finds less evidence of belief bias when instructions emphasised logical reasoning.
Stupple compares groups of participants who show much evidence of belief bias and those showing little belief bias.
Those with high levels of belief bias respond faster on syllogistic reasoning problems.
Neys finds high working memory capacity is an advantage only on problems requiring the use of analytic processes.
A secondary task impaired performance only on problems requiring analytic processes.
Evans and Curtis-Holmes find belief bias was stronger when time was strictly limited.
Thompson suggests two processes are used in syllogistic reasoning.
Participants first provide an intuitive answer.
This is followed by an assessment of that answer's correctness (the feeling of rightness).
After that, participants have unlimited time to reconsider their initial answer and provide a final (analytic or deliberate) answer.
Thompson argues that we possess a monitoring system (assessed by the feeling-of-rightness ratings) that evaluates the output of heuristic or intuitive processes.
Neys reports evidence that people are more responsive to the logical structure of reasoning problems than their performance accuracy alone suggests.
The heuristic–analytic theory of reasoning has several successes.
The notion that cognitive processes used by individuals to solve reasoning problems are the same as those used in other cognitive tasks seems correct.
Evidence supports the notion that thinking is based on singularity, relevance and satisficing principles.
There is convincing evidence for the distinction between heuristic and analytic processes.
The theory accounts for individual differences, for example in working memory capacity.
One limitation of the approach is that the distinction between implicit heuristic and explicit analytic processes is an oversimplification.
Also, the distinction between heuristic and analytic processes poses the problem of
working out exactly how these two different kinds of processes interact.
It is not very clear precisely what the analytic processes are or how individuals decide which ones
to use.
Logical processing can involve heuristic or intuitive processes occurring below the conscious level.
The assumption that heuristic processing is followed by analytic processing in a serial fashion may not be entirely correct.
According to mental model theory, people construct one or more mental models, mainly representing explicitly what is true.
Mental model theory fails to specify in detail how the initial mental models are constructed, and
people often form fewer mental models than expected.
Dual-system theories address the two main limitations of most other research into human reasoning because they take account of individual differences in performance and processes.
There is now convincing evidence for a distinction between relatively automatic, heuristic-based processes and more effortful analytic-based processes.
However, it is unlikely that we can capture all the richness of human reasoning simply by assuming the existence of two cognitive systems.
Prado finds the brain system for deductive reasoning is centred in the left hemisphere involving frontal and parietal areas.
Specific brain areas activated during deductive reasoning included the inferior frontal gyrus; the middle frontal gyrus; the medial frontal gyrus; the precentral gyrus; and the basal ganglia. (And Geary thinks we think with our fingers!)
Goel studies patients having damage to left or right parietal cortex.
Those with left-side damage perform worse than those with right-side damage on reasoning tasks in which complete information is provided.
Prado finds the precise brain areas associated with deductive reasoning depended to a large
extent on the nature of the task.
Prado also finds that the left inferior frontal gyrus (BA9/44) is more activated during the processing of categorical arguments.
Prado further finds the left precentral gyrus (BA6) is more activated with propositional reasoning than with categorical or relational reasoning.
Language seems to play little or no role in the processing of reasoning tasks once the premises have been read (Monti and Osherson).
Reverberi identifies three strategies used in categorical reasoning.
One strategy is sensitivity to the logical form of problems (the left inferior lateral frontal (BA44/45) and superior medial frontal (BA6/8) areas).
A second strategy is sensitivity to the validity of conclusions (i.e., accurate performance), associated with the left ventro-lateral frontal (BA47) area.
A third, heuristic strategy is associated with no specific pattern of brain activation.
More intelligent individuals exhibit less belief bias because they make more use of analytic
processing strategies (Neys).
Individual differences in performance accuracy (and thus low belief bias) are strongly associated with activation in the right inferior frontal cortex under low and high cognitive load conditions
(Tsujii and Watanabe).
Fangmeier uses mental model theory as the basis for assuming the existence of three stages of processing in relational reasoning.
Different brain areas are associated with each stage.
In premise processing, there is temporo-occipital activation, reflecting the use of visuo-spatial processing.
In premise integration, the anterior prefrontal cortex (e.g., BA10), an area associated with executive processing, is activated.
Finally, in validation, the posterior parietal cortex is activated, as are areas within the prefrontal cortex (BA6, BA8) and the dorsal cingulate cortex.
Bonnefond studies the brain processes associated with conditional reasoning focusing on modus ponens.
Bonnefond finds enhanced brain activity when premises and conclusions do not match, and anticipatory processing before the second premise when they do match.
Limited progress has been made in identifying the brain systems involved in deductive reasoning.
This is because task differences and individual differences affect the results.
Informal reasoning is a form of reasoning based on one’s knowledge and experience.
People make extensive use of informal reasoning processes such as heuristics in formal deductive reasoning tasks.
However, formal and informal reasoning also differ in several respects: the role of content; contextual factors; the fact that informal reasoning concerns probabilities rather than certainties; and motivation.
Ricco identifies common informal fallacies.
There's Irrelevance (seeking to support a claim with an irrelevant reason), and there's the Slippery slope (arguing that accepting a claim will lead, step by step, to unacceptable consequences).
The my-side bias is the tendency to evaluate statements with respect to one’s own beliefs rather
than solely on their merits (Stanovich and West).
Support for the probabilistic approach is reported by Hahn and Oaksford.
Hahn and Oaksford identify several factors influencing the perceived strength of a conclusion: the degree of previous conviction or belief; the polarity of the argument (positive arguments have more impact than negative ones); and the strength of the evidence.
Hahn and Oaksford find a Bayesian model predicted informal reasoning performance very well.
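A minimal sketch of the Bayesian idea (the numbers are invented for illustration and are not Hahn and Oaksford's): argument strength is modelled as the posterior belief in a conclusion after conditioning on the evidence, so both prior conviction and evidence strength matter, as listed above.

```python
def posterior(prior, p_e_given_c, p_e_given_not_c):
    """P(C | E) by Bayes' theorem: the 'strength' of the argument
    from evidence E to conclusion C."""
    p_e = p_e_given_c * prior + p_e_given_not_c * (1 - prior)
    return p_e_given_c * prior / p_e

# Prior conviction matters: the same evidence moves a sceptic less.
for prior in (0.1, 0.5, 0.9):
    print("prior", prior, "->", round(posterior(prior, 0.8, 0.3), 3))

# Evidence strength matters: more diagnostic evidence (a higher
# likelihood ratio) yields a higher posterior at the same prior.
for lik, fa in ((0.6, 0.4), (0.8, 0.2), (0.95, 0.05)):
    print("likelihoods", (lik, fa), "->", round(posterior(0.5, lik, fa), 3))
```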
However, Bowers and Davis argue that the Bayesian approach is too flexible and thus hard to falsify.
Sá finds unsophisticated reasoning is more common among those of lower cognitive ability.
Informal reasoning is more important in everyday life than deductive reasoning.
However, most reasoning research is far removed from everyday life.
Hahn and Oaksford propose a framework for research on informal reasoning based on probabilistic principles.
There is reasonable support for their model, particularly for the role of prior belief and new evidence on strength of argument.
In the future, it will be important to establish the similarities and differences in processes underlying performance on informal and deductive reasoning tasks.
But, to go back to Grice (and Kantotle, his favourite philosopher): are humans rational?
Much evidence seems to indicate that our thinking and reasoning are often inadequate, suggesting that we are not rational, even if Grice thought he was.
Human performance on deductive reasoning tasks does seem very prone to error.
Most people cope well with problems in everyday life, yet seem irrational and illogical when given reasoning problems in the laboratory.
However, it may well be that our everyday thinking is less rational than we believe.
Heuristics allow us to make rapid, reasonably accurate, judgements and decisions, as Maule and Hodgkinson point out.
Laboratory research findings suggest people can think rationally when problems are presented in a readily understandable form.
Many of the apparent "errors" on deductive reasoning tasks may also be less serious than they seem.
There is reasonable support for the notion that factors such as participants’ misinterpretation of problems, or lack of motivation, explain only a fraction of errors in thinking and reasoning
(e.g., Camerer and Hogarth).
Individual differences in intelligence and working memory also influence performance on conditional reasoning tasks.
Some researchers have found inadequacies in performance even when steps are taken to ensure that participants fully understand the problem (e.g., Tversky and Kahneman’s conjunction fallacy study).
Interestingly, those who are incompetent have little insight into their reasoning failures.
This is the Dunning–Kruger effect.
Deciding whether humans are rational depends on how we define “rationality” -- a definition with which Popperians might not agree.
Sternberg points out that few problems of consequence in our lives have a deductive, or indeed any meaningful kind of, 'correct' solution.
Normativism “is the idea that human thinking reflects a normative system, one conforming to norms or standards against which it should be measured and judged” (Elqayam and Evans).
An alternative approach is that human rationality involves effective use of probabilities rather than logic.
Oaksford and Chater put forward an influential probabilistic approach to human reasoning.
Simon suggests the notion of bounded rationality should be considered in human reasoning.
This means an individual’s informal reasoning is rational if it achieves his/her goal of arguing persuasively.
Many “errors” in human thinking are due to limited processing capacity rather than irrationality (cf. Pears, "Motivated Irrationality").
Toplak reports a correlation of +0.32 between cognitive ability and performance across 15 judgement and decision tasks.
Stanovich developed the tripartite model, which distinguishes two types of processing.
One is Type 1 processing (e.g., the use of heuristics) within the autonomous mind, which is rapid and fairly automatic.
The other is Type 2 processing (also called System 2 processing), which is slow and effortful.
There are three different reasons why individuals produce incorrect responses when confronted by problems: the individual lacks the mindware (e.g., rules, strategies) to override the heuristic response; the individual has the necessary mindware but fails to realise the need to override the heuristic response; or the individual has the necessary mindware and realises that the heuristic response should be overridden, but lacks sufficient decoupling capacity.
Stanovich coined a hybrid term (one that scared Grice), "dysrationalia", to refer to the inability to think and behave rationally despite having adequate intelligence.
Most people (including those with high IQs) are cognitive misers, preferring to solve problems with fast, easy strategies than with more accurate effortful ones.
Humans can still be considered rational -- and so Barbara remains analytic:
iv. All men are rational.
v. Socrates is a man.
vi. Therefore, Socrates is rational.
-- because errors are caused by limited processing capacity (Simon).
So-called classical logic (what Grice means by this in "Logic and Conversation", i.e. "Principia" and its heirs) is almost totally irrelevant to our everyday lives because it deals in certainties.
Our thinking and reasoning are rational when used to achieve our goals.
However, humans can be considered irrational because many humans are cognitive misers.
There is a widespread tendency on judgement tasks to de-emphasise base-rate information.
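A worked example of why de-emphasising base rates misleads, in the style of the classic rare-condition screening problem (the numbers are illustrative):

```python
# P(condition | positive test) when the condition is rare.
base_rate = 0.01       # P(condition)
sensitivity = 0.9      # P(positive | condition)
false_alarm = 0.1      # P(positive | no condition)

p_positive = sensitivity * base_rate + false_alarm * (1 - base_rate)
p_condition_given_positive = sensitivity * base_rate / p_positive

print(round(p_condition_given_positive, 3))  # ~0.083
# Ignoring the 1% base rate, people often answer "about 90%"; Bayes'
# theorem shows the true posterior is under 10%.
```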
People fail to think rationally because they are unaware of the limitations and errors in their thinking.
Apparently poor performance by most people on deductive reasoning tasks does not mean we are illogical and irrational because of the existence of the normative system problem,
the interpretation problem and the external validity problem.
Yet, when Grandy and Warner decided on a festschrift for P. Grice, they came up with "Philosophical Grounds of Rationality: Intentions, Categories, Ends" -- which is, of course, an acronym: PGRICE ("and Clarendon didn't notice!").