We take brain states and utterances and paintings to represent things. What’s that about?
Philosophers throw around a lot of terms/jargon when talking about this basic phenomenon.
The basic idea: there are thinkers/reasoners who communicate with one another, and who can represent the world as one way rather than another, both to themselves (e.g. in deliberative thought) and to others (e.g. in speech).
What's all this business about interpreting symbols/signals to mean one thing rather than another? What distinguishes a cognitive system that truly represents the world from a cognitive system (if it can even be called that) that just pushes electrical signals around?
What is Searle trying to show? He wants to show that running a computer program can’t be sufficient for having a mind/thinking. (Perhaps it is necessary).
Syntax vs. Semantics
“It is essential to our conception of a digital computer that its operations can be specified purely formally; that is, we specify the steps in the operation of the computer in terms of abstract symbols.” (670)
“But the symbols have no meaning; they have no semantic content; they are not about anything. They have to be specified purely in terms of their formal or syntactical structure.” (670)
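Searle's point about purely formal specification can be made vivid with a toy sketch (my illustration, not from the paper): a program that maps input symbol strings to output symbol strings by shape alone. The rule table and replies here are made up for illustration; nothing in the program's operation depends on what, if anything, the symbols mean.

```python
# A "Chinese Room" reduced to pure syntax: the program pairs input strings
# with output strings by pattern matching alone. The rules are specified
# entirely in terms of the symbols' formal shape, not their meaning.
RULE_BOOK = {
    "你好吗": "我很好",      # the program never "knows" this exchange is a greeting
    "下雨了吗": "没有下雨",  # ...or that this one is about the weather
}

def room(input_symbols: str) -> str:
    # Purely formal step: match the shape of the input, emit the paired output.
    return RULE_BOOK.get(input_symbols, "请再说一遍")

print(room("你好吗"))  # a fluent-looking Chinese reply, produced without understanding
```

On Searle's view, no amount of elaborating this rule table adds semantic content: the inputs and outputs remain uninterpreted shapes to the system that manipulates them.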
Duplication vs. Simulation
“Those features [consciousness, thoughts, feelings, emotions], by definition, the computer is unable to duplicate however powerful may be its ability to simulate. ...No simulation ever constitutes duplication.” (673)
“...nobody supposes that the computer simulation is actually the real thing; no one supposes that a computer simulation of a storm will leave us all wet, or a computer simulation of a fire is likely to burn the house down.” (673)
(The above two quotes come from an earlier version of this argument, presented in "Minds, Brains, and Programs")
The Point: The room, as a whole, spits out meaningful Chinese sentences. But the man does not understand Chinese.
The Systems Reply: There’s an important disanalogy: the man ought not be compared to the computer, since the computer is analogous to the whole system—the rule books, the data banks of Chinese symbols, etc. The man is more like the CPU, and the rule book is like the program stored in memory. So perhaps the whole system does understand Chinese.
Searle’s Response: My point applies even if the man applies the rules to internal, memorized rule books. Neither the system nor the man understands Chinese.
An Additional Reply: There are actually two ‘programs’ running on the same ‘hardware’ (the man), and so really two minds instantiated in him. The Chinese program, instantiated in the man, understands Chinese, but the man does not. And the English program certainly doesn’t understand Chinese.
Wait--is this even an argument?
Basic problem: Searle beats you over the head with “it’s mere syntax!” without providing a positive account of what semantic content is, or of why that semantic content has to be something over and above pure syntax.
Searle makes it super clear that he believes you can't successfully reduce semantic truths to 'mere' syntactic truths, but I don't see much of an attempt to argue this point, rather than just repeating the same basic idea over and over again.
If there's something more to the view than "isn't it obvious!", then he didn't include that reasoning here. Besides, this is an actively debated issue!
Searle leans on the familiar “digital computer” sense of the computer model of the mind, but that's not really warranted, given what proponents of that model actually believe. The charitable thing he could have said: there’s a sense of computation which isn’t tied to a specific artifact, but to a broader idea of information-processing and symbol manipulation as part of an individual system's strategy for acting upon the world and making decisions.
Taking philosophy seriously means taking seriously your own temperament when reasoning. And the right temperament is not the one on display in this paper.