Block’s “Psychologism and Behaviorism” has two aims: 1) home in on the best version of behaviorism about intelligence, and 2) show that even this very best behaviorist account of intelligence fails (for broadly ‘functionalist’ reasons).
Block’s preferred philosophical terminology is idiosyncratic...
Block’s description of the original Turing Test:
And for what is the Turing Test a test? Intelligence!
So to a first approximation, the conception of intelligence we find embedded in the Turing Test...
i. Intelligence just is the ability to pass the Turing Test (if it is given). (‘crude operationalism’)
Basic problem: Measuring instruments are fallible, so we shouldn’t confuse measurements with the thing being measured.
Initial solution: Put it in terms of behavioral dispositions instead...
ii. Intelligence just is the behavioral disposition to pass the Turing Test (if it is given). (Familiar Rylean Behaviorism)
Basic problem: “In sum, human judges may be unfairly chauvinist in rejecting genuinely intelligent machines, and they may be overly liberal in accepting cleverly-engineered, mindless machines.”
Initial solution: Replace the imitation game with a simpler game of ‘simply produce sensible verbal responses’
iii. “Intelligence (or more accurately, conversational intelligence) is the disposition to produce a sensible sequence of verbal responses to a sequence of verbal stimuli, whatever they may be.”
Basic problem: The standard functionalist objections to behaviorism.
But! And here’s the crucial point:
Upshot: The behavioral tests are maybe not necessary for intelligence, but they’re perhaps sufficient!
iv. “Intelligence (or, more accurately, conversational intelligence) is the capacity to produce a sensible sequence of verbal responses to a sequence of verbal stimuli, whatever they may be.” (Block’s Neo-Turing Test for Intelligence)
Perhaps behaviorism gives the right account of intelligence, even if not the right account for other kinds of mental states/properties...
Block’s objection to the Neo-Turing Test’s conception of intelligence comes down to a fairly simple claim: we can conceive of a system that has the capacity to produce perfectly sensible outputs without itself producing those outputs intelligently.
On-the-fly reasoning is more sophisticated than a simple lookup table, and it’s that more sophisticated thing that we’re trying to characterize.
Our concept of intelligence involves something more than just the ‘external’ pattern of sensory inputs and behavioral outputs. The internal pattern of information processing that connects those inputs to those outputs also matters. The outputs need to be produced in the right way: in a way that looks something like ‘abstract rational thought’ rather than some dumb lookup table.
TL;DR: functional organization matters to intelligence!
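To make the contrast concrete, here’s a minimal sketch (my illustration, not Block’s; the prompts and function names are hypothetical). Both responders produce the same outputs on the prompts the table covers, but only one of them computes anything at runtime:

```python
import re

# A "Blockhead"-style responder: every sensible reply is pre-stored.
# All the thinking was done in advance, by the programmers.
LOOKUP_TABLE = {
    "What is 2 + 3?": "5",
    "What is 7 + 4?": "11",
    # ...one entry for every exchange the programmers anticipated...
}

def blockhead_reply(prompt: str) -> str:
    """Answer by brute retrieval; no reasoning happens at runtime."""
    return LOOKUP_TABLE.get(prompt, "Hmm, interesting.")

def reasoning_reply(prompt: str) -> str:
    """Answer by actually doing the arithmetic at runtime, so it
    handles arbitrary instances of the pattern, not just stored ones."""
    match = re.match(r"What is (\d+) \+ (\d+)\?", prompt)
    if match:
        return str(int(match.group(1)) + int(match.group(2)))
    return "Hmm, interesting."

# Behaviorally indistinguishable on the covered inputs...
assert blockhead_reply("What is 2 + 3?") == reasoning_reply("What is 2 + 3?")
# ...which is exactly why a purely behavioral test can't tell them apart.
```

The input/output profiles match on every covered prompt; the difference Block cares about lies entirely in how those outputs get produced.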
Objection 1: “Your argument is too strong in that it could be said of any intelligent machine that the intelligence it exhibits is that of its programmers.”
Block’s Reply: “The trouble with the neo-Turing Test conception of intelligence (and its predecessors) is precisely that it does not allow us to distinguish between behavior that reflects a machine's own intelligence, and behavior that reflects only the intelligence of the machine's programmers.”
Objection 3(ish): This is a merely verbal dispute. You insist that the concept of intelligence ‘includes something more’ than mere input/output patterns. But unless you can specify what this extra special ingredient is, you’re just helping yourself to a magical (potentially spooky?) conception of ‘intelligence’!
Block’s Reply: “...my point is based on the sort of information processing difference that exists.” All you need to grant is that there is an interesting and worthwhile difference between lookup-table algorithms and ‘on-the-fly’ reasoning algorithms. Call it whatever you want. That seems to be something we care about when we talk about ‘intelligence’.
Objection 6(ish): Are you sure what you’ve described is actually conceivable? Don’t you run into a problem where the physical realization of your lookalike intelligence would have to be larger than the physical universe?
Block’s Reply: “My argument requires only that the machine be logically possible, not that it be feasible or even nomologically possible. Behaviorist analyses were generally presented as conceptual analyses, and it is difficult to see how conceptions such as the neo-Turing Test conception could be seen in a very different light.”
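A quick back-of-the-envelope calculation (my numbers, not Block’s) shows why feasibility is hopeless even though logical possibility survives. The number of candidate strings such a table would have to index grows exponentially with conversation length:

```python
# Rough arithmetic (assumed figures, not Block's): how many strings would
# a lookup table need to index for even one short typed exchange?

ALPHABET_SIZE = 27      # assumption: 26 letters plus a space
EXCHANGE_LENGTH = 100   # assumption: a modest 100-character exchange

possible_strings = ALPHABET_SIZE ** EXCHANGE_LENGTH    # roughly 10**143
ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80                # standard rough estimate

print(f"candidate strings: about 10^{len(str(possible_strings)) - 1}")
print(possible_strings > ATOMS_IN_OBSERVABLE_UNIVERSE)  # True
```

Even if only one string in a trillion counted as ‘sensible’, the table would still dwarf the universe’s atom count, which is why Block retreats to mere logical possibility.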
My Objection (?): Doesn’t “actual and potential behavior” (the sort of thing that a Rylean logical behaviorist likes) ultimately stem from the pattern of ‘internal’ information processing? That is, the two always in fact go together. And so for any postulated “lookalike” intelligence, so long as it’s finite (which it has to be), we can dream up scenarios in which it would malfunction in a telling way, or otherwise fail to demonstrate the full range of cognitive flexibility that a ‘real’ intelligence can exhibit.
A reply (?): Remember that Block’s point is conceptual, rather than empirical. Those two things (input/output patterns vs. information-processing patterns) may in fact be tightly connected. But that tight connection would be explained via the information-processing conception of intelligence.
(Ultimately, I suspect this comes down to realism vs. anti-realism about intelligence/cognition/rationality as a kind of real ‘achievement’, separable from less-philosophically-interesting brute causal laws. Wait for Dennett...)