Saturday, January 6, 2018

3a. Searle, John R. (1980) Minds, brains, and programs

Searle, John R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3 (3): 417-457

This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4.





see also:

Click here --> SEARLE VIDEO

51 comments:

  1. I really enjoyed the Searle article and I believe he makes a convincing argument against what he refers to as strong AI (computationalism). I wanted to present a counter-argument to Searle's, pertaining to Schank's computer program as it compares to human native speakers of a language. My argument is for a partially computationalist view of cognition, which I do not believe Searle has adequately disproved in his article.
    On page 4, Searle asserts, "As long as the program is defined in terms of computational operations on purely formally defined elements, what the example suggests is that these by themselves have no interesting connection with understanding. They are certainly not sufficient conditions, and not the slightest reason has been given to suppose that they are necessary conditions or even that they make a significant contribution to understanding."
    This is an extremist argument in my opinion, one which completely rejects the possibility that computation could have anything to do with human cognition. I propose an alternative to Schank's program with one added element: sprinkled through the story about the restaurant, made-up words would be inserted, each with a prefix, suffix, or root that hints at the made-up word's meaning. Imagine then that the computers were given the prefixes, suffixes, and root words and taught how to make assumptions about the meanings of these words (a toy sketch of what such a program might look like follows below). If the machines performed as well as humans at this task, could we not then turn our attention not to how the computers are "thinking" but to how the humans are "computing"? Is there really any conceptual difference between how a computer and a human would go about guessing a made-up English word's meaning if they were provided with the same background information? With situations such as these, I find it very difficult to agree wholeheartedly with Searle's complete dismissal of computationalism as part of the equation.
    I also just wanted to give a nod to Bertrand Russell's "On Denoting" and its presence in this article. One of Searle's simplest and most convincing points was his distinction between the name and the meaning. While a human mind knows the meaning of "hamburger", the machine only reads the name. This is the fundamental difference between information processing and symbol manipulation, and I see concepts rooted in Russell's paper, where he introduces the fundamental difference between naming an object and denoting it.
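    For concreteness, here is the rough Python sketch I mentioned above (the affixes, roots, and glosses are invented for illustration, and this is my own toy, not Schank's program):

    # Toy affix-based meaning guesser: produce a crude gloss for a made-up word
    # using nothing but string matching against invented prefix/suffix/root tables.
    PREFIXES = {"un": "not", "re": "again"}
    SUFFIXES = {"able": "capable of being", "er": "one who"}
    ROOTS = {"cook": "prepare food", "serv": "bring food to the table"}

    def guess_meaning(word: str) -> str:
        """Return a rough gloss for a nonce word by decomposing it formally."""
        gloss = []
        for prefix, meaning in PREFIXES.items():
            if word.startswith(prefix):
                gloss.append(meaning)
                word = word[len(prefix):]
                break
        suffix_meaning = ""
        for suffix, meaning in SUFFIXES.items():
            if word.endswith(suffix):
                suffix_meaning = meaning
                word = word[:-len(suffix)]
                break
        for root, meaning in ROOTS.items():
            if root in word:
                gloss.append((suffix_meaning + " " + meaning).strip())
                break
        return " ".join(gloss) if gloss else "unknown"

    print(guess_meaning("unservable"))  # -> "not capable of being bring food to the table"
    print(guess_meaning("recooker"))    # -> "again one who prepare food"

    The glosses come out clumsy on purpose: the program never knows what serving or cooking is, yet it "guesses" roughly the way a test-taker armed only with affix rules would.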

    ReplyDelete
  2. "Rather, whatever purely formal principles you put into a computer, they will not be sufficient for understanding, since a human will be able to follow formal principles without understanding anything".

    In Searle's rebuttal of the principles of Strong AI, he argues that no machine that is purely based on formal symbols is capable of simulating a human's intentionality or ability to "think". Machines have, he suggests, just the "syntax but not the semantics", and he outright rejects the possibility that computationalism could ever answer what I assume could also be another stab at the "hard question" of consciousness. Why do we feel or do the things we do? Why do we have a "system" that allows the squiggle squaggle that is English to "mean" something to us, but not the squiggle squaggle that is these foreign, uninterpretable Chinese symbols shown to us in an isolated room?

    While I do follow Searle's train of thought in the highlighted quote (the act of having to apply a program to cause the production of human outputs in a machine has an air of falseness that may lead one to conclude a lack of human "intentionality"), I also have some personal thoughts and queries. When I personally think of what "understanding" means, I cannot separate the concept from "learning". Applying "learning" to what Searle is proposing, it is hard for me to see how learning the formal abstractions/rules of symbols can be so much less significant to understanding human mental states than learning the "causal effects" between symbols. While I do agree that the ability to extract "meaning" is something individualistic, and something we can never really know whether machines do in the same way we do, isn't our ability to grasp the formalisms also a sort of computation that highly influences how we structure our so-called causal effects or "thinking"? Just because it is reproducible in a machine, does that make it any less "human"?

    ReplyDelete
    Replies
    1. “he outrightly rejects the possibility that computationalism could ever fulfill what I assume could also be another stab at the "hard question" of consciousness”

      Hey Madeleine,
      I found this section of your comment interesting because it made me realize I was not entirely sure whether Searle is trying to answer the hard question or the easy question. I am more inclined to think he is addressing the 'easy question' (albeit demonstrating how it is not so easy after all). I may very well have completely misunderstood something, but I'll explain my rationale anyway. The way I understood it was that Searle is actually arguing that strong AI cannot be correct: a properly programmed machine cannot be a mind, because it cannot actually do all that a mind can do. Therefore we cannot claim that what it does (this programmed computation, input/output stuff) could possibly explain how and why our minds do all that we do (the easy question?), given that it can't do something we can. The something that is missing is understanding, understanding being key to intentionality. The brain has causal properties that would need to be duplicated for there to be understanding, and in the case described with strong AI, it simply isn't (I'm thinking of the whole Chinese room example and the line "The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states" (p.8)). My explanation is pretty confusing and maybe not kid-sib friendly, so another way I think of it is going off what we've said in class:
      It feels like something to understand, but Searle is not saying the robot does not feel and therefore does not understand. I think he is not even getting to that. If the hard question is about feeling, asking "if it feels like something to understand, then why and how", then Searle is stopping before even asking whether the machine feels understanding and how, because he has demonstrated beforehand that the machine does not understand at all; it is just symbol manipulation.

      Delete
  3. "No purely formal model will ever be sufficient by itself for intentionality because the formal properties are not by themselves constitutive of intentionality, and they have by themselves no causal powers except the power, when instantiated, to produce the next stage of the formalism when the machine is running."

    Intentionality and causality seem to be benchmarks of cognition throughout Searle's argument against strong AI. Searle uses these concepts to show that the tenets of strong AI have not yet begun to uncover what cognition is or how it happens, despite confidence otherwise. He wrote: ”mental states and events are literally a product of the operation of the brain, but the program is not in that way a product of the computer”, illustrating the idea that formal programs lack the causal power that cognition maintains. I found this point to be particularly fascinating in conjunction with Searle’s issues with the Turing test as a measure of success. The inadequacy of the Turing Test according to the CRA is that simulation is not equivalent to understanding, and simulations can be made strong enough to trick a human participant (or pass the Turing test).

    This was an issue I took with the concept of the Turing test without ever being able to fully vocalize my issue with it. If we create a program that passes the Turing test, can we really say that we will be closer to understanding how cognition happens? Or will we still simply just be looking at the sequence of brain activities?

    ReplyDelete
    Replies
    1. I think that in many ways when we aim to create AI from the activities of the brain it will just result in superficial understandings and models of cognition. To me the most likely way of us creating a truly intelligent entity would be through experimentation (by creating multiple possible models) and testing those to see if they meet the necessary requirements to pass the Turing test and qualify as conscious beings. So in those ways I find it unlikely that we will come closer to understanding consciousness by simply attempting to create a machine that will pass the Turing test (since we won’t be truly understanding but just testing out what works and what doesn’t).

      Delete
  4. When I started reading the article, I thought that the person receiving the rules and symbols in Chinese and producing the output answers was comparable to the English setup. But after reading the rest of the article, I agree with Searle that they are not the same at all. The quote “the computer understanding is not just partial or incomplete; it is zero”, makes a lot of sense to me. In the Chinese room when the person is getting the symbols, they have no knowledge of what they mean and therefore no understanding. This is similar to a computer because no matter the sophistication of the programming and how accurate the outputs are, the computer will never be able to understand the meaning of what it is doing. Searle explains that when the computer puts out the answer “4”, it only knows the symbol, the syntax, of the number rather than the meaning or understanding the question, which is comparable to the man producing the correct output symbols. They both do not understand.

    ReplyDelete
    Replies
    1. I agree that the person doing the Chinese manipulations doesn’t understand Chinese, but I feel like they must understand they are doing some manipulations of Chinese characters according to rules given to them. They must know those are Chinese characters and it feels like something to apply the rules given to them to get the output. I don’t think a machine is quite the same and understands even less than that because there is no proof that a machine knows it is doing manipulations or that it even recognizes the symbols it manipulates as Chinese characters or numbers.

      Delete
  5. “I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, Schank's computer understands nothing of any stories.”
    The first issue I have is: is Searle saying that it's possible to do language without understanding? That is what the Chinese room thought experiment entails, right? Because in his view it's possible to operate with language in a way that's the same as a native speaker (of any language) all the while not explicitly knowing anything about the relevant inputs/outputs, and I don't really see Searle explaining that point. Secondly, I think Searle actually gets more traction out of Schank's computer program being able to answer questions about stories without understanding stories. I actually recently read about (weak) AI programs which do this very thing, and interestingly they performed better than people (if interested: https://www.washingtonpost.com/business/economy/ais-ability-to-read-hailed-as-historical-milestone-but-computers-arent-quite-there/2018/01/16/04638f2e-faf6-11e7-a46b-a3614530bd87_story.html?utm_term=.16580b515b7b ). Although one conclusion that can be drawn from Schank's computer is that comprehension isn't necessary to pass a comprehension test, this could mean something else: that our "comprehension tests" aren't actually testing comprehension. In other words, being able to respond correctly more often than not (it doesn't have to be perfect, because humans don't perform perfectly) on a "reading comprehension test" using an uncomprehending program necessarily implies we aren't testing comprehension. (I also think this intuitively makes sense to anyone who's taken a reading comp test.) But I do think that this is a qualitatively different situation (i.e. not just a difference of scale/complexity, but a different *kind* of problem entirely) from Searle's Chinese Room. To perform at human level with a native speaker of a language (which, once again, doesn't mean perfectly, because misunderstanding and the like does happen), I think understanding would have to be involved.

    ReplyDelete
    Replies
    1. "The first issue I have is: is Searle saying that it's possible to do language without understanding? That is what the Chinese room thought experiment entails, right? Because in his view it's possible to operate with language in a way that's the same as a native speaker (of any language) all the while not explicitly knowing anything about the relevant inputs/outputs, and I don't really see Searle explaining that point."

      I think it depends on what you mean by “doing language”. I don’t think Searle is trying to make a point about language specifically. He’s just saying you can manipulate symbols (i.e. compute), without knowing what the symbols mean. If you want to make it less about spoken language, imagine that instead of Chinese, he’s talking about an arbitrary set of computational symbols that nobody except the original author of those symbols knows the meaning of. If you then sub in "computational symbols" for "Chinese", his argument makes just as much sense. Now imagine that the inputs are translated from English into computational symbols before you read them, and that the symbolic outputs you produce are translated into English after you no longer have access to them. In that sense, you can perform computations that look linguistic without understanding the language. On the other hand, if understanding is a necessary component of language, then of course it doesn’t make sense to say you can do language without understanding.

      But I think that’s exactly his point. If I phonetically learn to respond appropriately in Chinese to any given set of phrases, it doesn’t mean I speak the language when I don’t know what any of it means. And I can certainly do that. It would take a lot of practice and memorization. But in theory, if I know that when you say x, I have to say y, I don't have to know the meaning of x and y to respond appropriately. Searle's argument is that, in the same sense, computing the correct outputs based on given inputs doesn’t amount to thinking.
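      To make that concrete, here is a toy Python sketch of the memorize-and-respond strategy (the phrase pairs are invented placeholders; no grammar or meaning is encoded anywhere):

      # A memorized rulebook: "when you see this string of shapes, hand back that one."
      RULEBOOK = {
          "你好吗？": "我很好，谢谢。",
          "你吃了吗？": "吃了，你呢？",
          "今天天气怎么样？": "今天天气很好。",
      }

      def respond(incoming: str) -> str:
          # Pure symbol manipulation: match the input shape, return the paired shape.
          # The fallback is just another memorized shape, not a confession of confusion.
          return RULEBOOK.get(incoming, "对不起，我不明白。")

      print(respond("你好吗？"))  # prints the paired string; nothing here "means" anything to the program

      Scaled up enormously (and with rules rather than a bare lookup table), that is all Searle says he is doing in the room.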

      Delete
  6. I'm a bit curious about the way that Searle addresses McCarthy's argument that "Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance." While (to me) McCarthy is making a purely unfounded (and fairly ludicrous) claim, I feel like Searle is too dismissive in claiming that AI machines would have no cognitive ability just because they may not be able to articulate "understanding" or "intuition" in the way that humans are able to. More specifically, I am curious as to what criteria Searle would put forth when he says that non-human entities (whether that be a thermostat or a human liver) are ineligible candidates for cognition. What makes them ineligible? Is he taking it for granted that our natural inclination is to say that only humans are eligible, simply because we are unable to fully articulate why/how we are able to "understand"?

    ReplyDelete
  7. This comment has been removed by the author.

    ReplyDelete
  8. "No reason whatever has been offered to suppose that such principles are necessary or even contributory, since no reason has been given to suppose that when I understand English I am operating with any formal program at all." Would Chomsky's concept of the language acquisition device, an innate mental capacity that enables children/infants to acquire and produce language, not be an explanation of understanding language via a formal program?

    "The information in the Chinese case is solely in the eyes of the programmers and the interpreters, and there is nothing to prevent them from treating the input and output of my digestive organs as information if they so desire" This was a clever argument against the systems reply, as it showed that having inputs and outputs and a program in the middle to transition between the two states does not indicate any sort of true understanding.

    "I am receiving 'information' from the robot's 'perceptual' apparatus, and I am giving out 'instructions' to its motor apparatus without knowing either of these facts. I am the robot's homunculus, but unlike the traditional homunculus, I don't know what's going on." What proof does Searle have that the human homunculus does understand what's going on? Perhaps the way the body works is that if you get an injury, for example, the skin cells understand there is an injury and relay this to the sensorimotor cortex (which makes up the homunculus) to stimulate a certain brain region to release endorphins to reduce pain, but the homunculus just understands the command to release endorphins and has no idea that there is an injury. Then, in that case, Searle's argument that all he is doing is formal symbol manipulation without knowing the other facts would not be a sufficient argument against the Robot Reply. I would appreciate any insight on this.

    ReplyDelete
  9. "On the basis of these two assumptions we assume that even if Schank's program isn't the whole story about understanding, it may be part of the story. Well, I suppose that is an empirical possibility, but not the slightest reason has so far been given to believe that it is true, since what is suggested -- though certainly not demonstrated -- by the example is that the computer program is simply irrelevant to my understanding of the story. In the Chinese case I have everything that artificial intelligence can put into me by way of a program, and I understand nothing; in the English case I understand everything, and there is so far no reason at all to suppose that my understanding has anything to do with computer programs, that is, with computational operations on purely formally specified elements."

    I find it interesting that Searle is so quick to dismiss outright the possibility that the computer program has anything to do with understanding. I agree that programs do not “understand” in the way in which I understand but a program can help someone to understand/ be the starting point for gaining understanding. Languages are a good example of something that can be learnt via a formalization of symbols and grammatical rules that tell one how to use the symbols. There simply has to be meaning/context given to the symbols from outside the system. For example, when learning French as a child, I learnt all of the letters of the alphabet and how to put them together to spell. Once I had the ability to form symbol strings, I was taught to attach context to them. If we were able somehow to give computers a way to understand context and meaning, there would be no difference in the way in which a Chinese room and I learn Chinese. I think that Searle has the right idea by being suspicious of Strong AI’s claims because we cannot, as of yet, give computer programs the context needed to “understand” in the human sense of the word. However, we should not be so quick to dismiss the idea of computation as a part of cognition.

    ReplyDelete
  10. In his article, Searle comments that AI has become an attractive way to reproduce and explain cognition since machines can be made to do some of the things we are able to do. People believe that when a computer system is simulating features of some situation, it is ideally being done using the same program that we would be using, and thus the information processing is occurring via the same mental processes for both us and the computer. In reality, simulations do not reflect reality: as discussed in class, a computer simulation of a vacuum cleaner cannot clean dust off the floor in the real world; it can only do so within the simulation itself. If a computer is simulating one of our own mental processes, is it safe to say that whatever it is simulating can never apply to the real world, just as in the vacuum cleaner example? Attributing mental states to a computer simply because it can do the same cognitive tasks as we do is very tempting, but Searle's insistence on the necessity of intentionality drives one away from the temptation, as this is the difference between what we do and what a machine does. Searle comments that intentionality is likely "causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena". I'm skeptical about the claim that intentionality comes about in a manner similar to the above-mentioned processes, in that it is in some way created like milk or sugars in an organism, but I do agree with his notion that people are more likely to believe that it is possible for a machine to develop real intentionality through simulation alone, while no one would ever think that a computer program which simulates lactation could produce real milk. I guess where cognition is concerned, the power of a simulation could transcend the simulation itself, in people's perceptions that is.

    ReplyDelete
    Replies
    1. I agree with your argument, and building off of it, I'd like to point out Searle's argument that we make sense of behavior in terms of intentionality. I found this argument to be very interesting, because it is true that when we see/feel something occur (such as our stomach grumbling, or performing arithmetic on a calculator) we attribute causality or intentionality where there isn't any. Our stomachs don't have minds with the intention to signal to us that we are hungry; it is simply a chemical process. It's interesting to think of organs, and how our brain is one too but holds so much more abstract power. And, by this, it makes sense that if we saw a robot acting indistinguishably from a human (passing the Turing test), we would attribute intentionality to its actions although these actions could be carried out simply through formal symbol manipulation. Overall, I agreed with most of what Searle had to say regarding his lack of support for computationalism.

      Delete
  11. When Searle deconstructs argument 3, he says that just simulating neurons firing at the synapse doesn't automatically create understanding. He gives the example of pipes being manipulated by the person in the Chinese room to further prove his point that firing at a synapse wouldn't change the machine's level of understanding, since you still can't be understanding Chinese after doing all the right firing or turning on all the right faucets. However, later in his paper he discusses how a man-made machine COULD possibly think if the machine had the same nervous system as us, with axons and dendrites. Does this not contradict his argument about how simply having synapses firing doesn't create thinking?

    Searle also discusses how, in the case of strong AI, there would be no incomplete or partial understanding on the computer's part. Is this not an extremist point of view, given that computers can still understand the script/instructions given to them (change line, overwrite, stop)? In the Chinese room example, the individual still needs to understand that they have to manipulate uninterpreted formal symbols, and some level of understanding is still required to perform the task itself.

    ReplyDelete
    Replies
    1. Your point about Searle’s reply to the brain simulator reply made me reflect more deeply about the role of neurons and how it may also relate to argument 2, the robot reply. Searle refutes the robot reply by putting himself into the robot such that he is the robot’s homunculus. He argues that although he is “receiving “information” from the robot’s “perceptual” apparatus, and I am giving out “instructions”…I don’t know what’s going on.”

      But think about how our brains work. Our brains are made up of billions of neurons that receive inputs, perform computations, and then send outputs in the form of neurotransmitters from their synapses. In other words, neurons perform just as the Searle homunculus does: they are performing formal symbol manipulation with no idea of what's happening in the world. Brains don't have intentionality or understanding at the neuron level, but we as humans do, because we are made up of more than the brain. In the same vein, can we say that just because the Searle homunculus has no clue whatsoever about what's going on, this doesn't mean that the lack of understanding extends to the robot?

      Delete
  12. Searle, in his paper, took advantage of one of the fundamental properties of computation, implementation independence, to attack computationalism. If cognition is purely computation, then he can use his own cognition, via the Chinese room argument, to show one thing: that computation alone does not achieve understanding. Later, the concept of embodied cognition emerged in part from Searle's pioneering argument: the meanings of symbols must be grounded in external reality, and the acquisition of symbols must be achieved by sensorimotor categorization. Only through combining computational execution with dynamic interaction with the environment can one truly achieve cognition and the possibility of passing a T2-level Turing test. I am wondering about the amount and range of symbols that need to be grounded by sensorimotor categorization. Do all the symbols in our symbol representation system need outside sources as referents, or do we only need a certain set of symbols as our foundation, from which all other concepts can be derived? Moreover, this leads to a deeper question about our linguistic systems: if the general meanings of two fundamental words in two separate linguistic systems vaguely point to the same categorizations, but with certain nuances, will the complex representations that derive from them in these two language systems end up picking out different realities? Could this be a reason why misunderstandings arise in communication?
    Meanwhile, another puzzling question remains: will embodied cognition become the final explanation for the hard problem? If we achieve the feeling of understanding something, do the feelings of everything else arise with it? So far, it seems to be the case.

    ReplyDelete
  13. I agree with Searle's argument that computation, which is formal symbol manipulation, is not cognition. He proves his point by giving an example of himself as the hardware that executes the algorithm in Chinese but ultimately does not understand Chinese. In this sense, a Chinese T2 passing the Turing Test is not decisive, refuting one of the propositions of Strong AI. However, I think that Searle's view was extreme in the sense that he prematurely dismisses the thought that the computer program could be part of cognition at all.

    The systems reply (Berkeley). "While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story. The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has 'data banks' of sets of Chinese symbols. Now, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part."

    In the CR experiment, Searle doesn't understand Chinese because he is only part of a system. But what he fails to recognize is that computation is still necessary for understanding -- it just does not encompass all of the processes that go into understanding. Computation is then part of the whole system that allows one to comprehend sentences. This raises the question of what the computer is missing in order to be able to understand. An answer that Searle proposes later in the paper is that the machine may be missing some sort of biological structure, depriving it of the ability to be intentional, but it seems unlikely that studying these biological processes will give us the answer as to how we understand.

    ReplyDelete
  14. I would like to address Searle’s answer to the Robot Reply (Yale). Searle says that “the addition of such ‘perceptual’ and ‘motor’ capacities adds nothing by way of understanding […]” I believe that this statement is incorrect. First, as he discussed it himself, there are many levels of understanding. To understand something is not an all-or-none phenomenon. Thus, adding sensorimotor capacities would increase the understanding perhaps to a deeper level. Getting feedback from the environment and being able to adapt to it brings the machine closer to a full understanding. Second, these “levels” of understanding should be identified, which brings me to a more fundamental question: what defines understanding? Do you really need to feel to understand? If we do, then maybe a stomach has some understanding abilities since it senses changes in acidity, for example. If not, then the robot could have some understanding capacities. It just would not have a full-blown understanding of all modalities of cognition which include feeling.

    ReplyDelete
    Replies
    1. It was my understanding that Searle was trying to explain that without intentionality, the addition of these perceptual and motor capabilities is irrelevant. Even though the machine would then be able to move around, eat, drink, etc., these are simply outputs of its internal programming and wiring. Without intentionality, these abilities do not add any further understanding.

      Delete
  15. I found Searle's response to the robot reply disappointing. I agree with his assessment that if the visual apparatus is only used to show him the Chinese symbols, then this doesn't allow any more understanding than the original thought experiment. However this interpretation of the robot reply seems limited, and not in the spirit of the argument. Granted I may be misrepresenting the reply myself, but consider the following excerpt:

    "this computer would not just take in formal symbols as input and give out formal symbols as output, but rather would actually operate the robot in such a way that the robot does something very much like perceiving, walking, moving about, hammering nails, eating drinking -- anything you like."

    It seems as if the authors did not intend for the robot to only look at the Chinese characters. Searle handicaps the argument by not granting it the full range of perception that is described. Surely we wouldn't expect a child raised in a state of sensory deprivation to understand anything, so why should it be different for the robot? If, however, we consider a scenario where the robot is able to have sensory feedback from its environment while being presented with the Chinese characters, it seems more likely that the robot would be able to make connections and do something more like understanding. This interpretation would still concede that there is more to cognition than computation, which I think he should have focused his response around rather than considering a weakened version of the argument.

    ReplyDelete
    Replies
    1. Do you think it is necessary for the robot to have sensory feedback in order to understand? Or is understanding possible through learning and just input feedback?

      I think about Siri as I ask these questions, because Siri is able to learn by the input we provide it, google is able to learn by past searches, etc. but neither receive sensory feedback. So is there learning without understanding? Or is what they are doing not learning?

      Delete
  16. "But first I want to block some common misunderstandings about "understanding": in many of these discussions one finds a lot of fancy footwork about the word "understanding." My critics point out that there are many different degrees of understanding; that "understanding" is not a simple two-place predicate; that there are even different kinds and levels of understanding, and often the law of excluded middle doesn't even apply in a straightforward way to statements of the form "x understands y"; that in many cases it is a matter for decision and not a simple matter of fact whether x understands y; and so on. To all of these points I want to say: of course, of course. But they have nothing to do with the points at issue. There are clear cases in which "understanding" literally applies and clear cases in which it does not apply; and these two sorts of cases are all I need for this argument. I understand stories in English; to a lesser degree I can understand stories in French; to a still lesser degree, stories in German; and in Chinese, not at all. My car and my adding machine, on the other hand, understand nothing: they are not in that line of business. We often attribute "understanding" and other cognitive predicates by metaphor and analogy to cars, adding machines, and other artifacts, but nothing is proved by such attributions. We say, "The door knows when to open because of its photoelectric cell," "The adding machine knows how (understands how to, is able) to do addition and subtraction but not division," and "The thermostat perceives changes in the temperature.""

    What does "understanding" entail, then? If we know we are understanding because we feel as if we are understanding, this necessarily implies there are degrees of understanding. I can't think of one example of someone truly reaching the most perfect understanding of something, because there's always more to know, i.e., the history behind something, the counter-arguments, etc. Isn't the only thing preventing Searle from "understanding" Chinese that he doesn't know the referents in the real world? And aren't those somewhat arbitrary, a product of our social history? I understand stories in English because I'm a native English speaker, so my repertoire of referents is much more extensive than my repertoire for French, even less so for German, and essentially non-existent for Chinese. We attribute understanding by metaphor and analogy because we use our existing knowledge of one object/idea and apply it to a novel one.

    ReplyDelete
  17. I find that Searle's Chinese room argument, although possible and worth considering, is a little impervious to the possibilities of machine learning. Particularly when he says "why on earth would anyone suppose that a computer simulation of understanding actually understood something?" Why couldn't the machine understand what it is simulating?
    Searle goes on to explain that the symbols the computer is reading are just that, and that no understanding is necessary. But couldn't you say that a machine that is able to learn from its past computations (say, an algorithm like the Metropolis-adjusted Langevin algorithm) has to understand what it has just computed in order to learn from it?

    ReplyDelete
    Replies
    1. For the second question, I think that (according to Searle) we can't claim the machine is learning anything from its past computations, because that would assume that the machine has "understood" the inputs and outputs, instead of just manipulating the symbols. I think that the machine could get faster at computing as it gets better at mastering the rules of the computation, but I don't think that is equivalent to learning (a tiny sketch of this is at the end of this reply). We could compare it to us being in the room with all of the Chinese symbols (given that neither of us speaks Chinese). The more we perform the task (the rules for which were given in English), the more efficient we will be at it. However, this doesn't mean that either one of us really "understands" any more Chinese, just that we have mastered the symbol manipulation. By "understand" here, I mean grasping the actual meaning of the symbols that we are manipulating.

      You also asked "Why couldn't the machine understand what it is simulating", which I'm also curious about. I understand that according to Searle, it is literally just performing a formal symbol manipulation, the same way that we could produce squoggles from squiggles in Chinese, but I'm not sure how we could know whether the machine "understood" what it was doing. It seems like even if we asked the machine and it replied that it did, its answer would not really be valid, as we don't know what's going on inside.
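      Just to illustrate the "faster but not wiser" point, here is a rough Python sketch (the rule is an arbitrary stand-in, not real Chinese): repeated inputs get answered from memory more quickly, yet everything stored is still just shape-to-shape pairs.

      from functools import lru_cache

      def apply_rules(symbols: str) -> str:
          # Stand-in for laboriously consulting the ledger of rules:
          # reverse the string and swap one arbitrary character for another.
          return symbols[::-1].replace("甲", "乙")

      @lru_cache(maxsize=None)
      def practiced_apply_rules(symbols: str) -> str:
          # Same rules, but repeated inputs are answered from memory (faster),
          # and the memory holds nothing except input/output shape pairs.
          return apply_rules(symbols)

      print(practiced_apply_rules("甲丙丁"))  # worked out the slow way the first time
      print(practiced_apply_rules("甲丙丁"))  # retrieved instantly the second time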

      Delete
  18. "I am receiving "information" from the robot's "perceptual" apparatus, and I am giving out "instructions" to its motor apparatus without knowing either of these facts. I am the robot's homunculus, but unlike the traditional homunculus, I don't know what's going on."

    I'm not super satisfied with this response to the robot argument from Searle, because I don't get how his argument differs from what humans do. Don't humans just receive information from our perceptual apparatus and give out instructions about how to respond without really knowing "how" we are doing it? We talked about how introspection is frivolous because we can't really know how we are doing what we are doing. In addition, we talked about humans having these homunculi that are essentially finding relevant pieces of information and linking them, but we are still clueless as to what is occurring within the "head of the homunculus". Searle is essentially saying that when we are in the Chinese room, we are acting like the robot's homunculus and just performing formal symbol manipulation, but is there a reason to believe that there is not a similar system at work in humans?

    ReplyDelete
    Replies
    1. Hi Maria. The whole homunculus idea can be hard to interpret, for sure. I think what Searle is asserting is: whether or not there is a tiny human in the head of the human or robot in the Chinese room, that homunculus would still not understand Chinese. All the homunculus is doing is taking in information from the perceptual system of the machine in the Chinese room and outputting a motor response: simple formal symbol manipulation. On the other hand, in the classic sense, the homunculus in our brains not only takes in and spits out information, but also understands this information. The difference here is that the homunculus in the Chinese room is manipulating symbols based on formal rules, i.e. computation. Alternatively, the homunculus in our minds still manipulates information, but it attaches meaning to these bits and understands them.

      Delete
  19. In his paper discussing strong and weak AI, Searle states that strong AI claims to be able to "understand" and have multiple cognitive states. He mentions the word "understanding" a further 40 times in his paper. Yet when reading through his work, I was confused by his clarification of understanding. Keeping up with the tradition we have in class of trying to define what a large term means, I think I can propose something that would help define Searle's understanding. I believe that understanding in this context means a few things. The first among them would be being able to interpret the information given and then organize this information into a format that can be used to answer questions about it. However, what sets understanding apart from knowing (which would be regurgitation of information based on a set of rules, as you would see in a calculator or even in Searle's Chinese Room thought experiment) is the fact that you can re-explain it in different language, or even reduce it and keep the core message. This would apply because a calculator cannot explain why 2+2 = 4; it just knows that it is, so there's knowledge without understanding. I hope my reasoning when trying to explain understanding might help in clearing things up (but I'm not so sure, so feedback would be greatly appreciated).

    ReplyDelete
    Replies
    1. To expand upon your definition of understanding, I would like to propose that "understanding" must also include the ability to alter algorithms based on new or confounding information. Suppose we return to the Chinese room argument. The participant performing calculations of the form "squiggle is followed by squoggle" is given an additional bank of Chinese characters that was not previously given to them. The participant would not be able to integrate these new Chinese characters into their responses, because they [the characters] are not accompanied by any rules for how to implement them (see the sketch at the end of this reply). Thus, even though the participant is able to produce outputs with the Chinese characters they were previously given, they cannot integrate the new ones into their data bank and alter/expand upon the calculation rules.

      Consider a second participant who is learning Chinese as a second language and already has a rudimentary grasp of the grammar and syntax of the language, as well as sufficient vocabulary (both written and spoken). They are given the same initial set of Chinese characters and rules for implementing them as the first participant. However, given the second bank of characters, the second participant is able to integrate some or most of them into their outputs, due to understanding the parts of speech, sentence structure, etc. This second participant understands Chinese because they have successfully integrated new information in order to alter/expand algorithms.
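      Here is a rough Python sketch of the contrast (the characters and the single rule are invented placeholders, not real Chinese): for the first participant, a character that arrives without a rule is simply unusable, whereas integrating it would take the kind of grammatical knowledge the second participant has, which no shape-matching table provides.

      # The first participant's entire "competence": shape in, shape out.
      RULES = {"水": "喝", "饭": "吃"}

      # A new bank of characters arrives with no accompanying rules.
      NEW_CHARACTERS = ["茶", "汤"]

      def participant_one(symbol: str) -> str:
          # Without a rule attached, a new character cannot be integrated at all.
          return RULES.get(symbol, "??")

      for ch in ["水"] + NEW_CHARACTERS:
          print(ch, "->", participant_one(ch))  # 水 -> 喝, 茶 -> ??, 汤 -> ??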

      Delete

  20. “As regards the second claim, that the program explains human understanding, we can see that the computer and its program do not provide sufficient conditions of understanding since the computer and the program are functioning, and there is no understanding.”
    This section reminded me of the notion of behavioral equivalence. Regardless of whether one understands Chinese or not, if the inputs and outputs are indistinguishable from those of a native Chinese speaker, is your understanding, or lack thereof, of the language relevant and essential? Conversely, if a native Chinese speaker and someone using the aforementioned program to spit out proper Chinese output reach the same conclusions, is the process they used to get there important, as long as they reach the same conclusions? Following the definitions of the word "understanding" that are later outlined, if we place the verb "to understand" on a continuum, I think that understanding the English translation of the Chinese input is sufficient to qualify as a lower degree of "understanding".
    I also found the mention of theory of mind very interesting. Is Searle insinuating that theory of mind is what sets humans apart from machines? Many people with autism struggle with theory of mind, in the sense that they are often unable to put themselves in the shoes of another and understand their mental states. In addition, if it is possible to program a computer with the capability to have such mindfulness, what applications could this yield for people with autism?

    ReplyDelete
  21. "The aim of the program is to simulate the human ability to understand stories. It is characteristic of human beings' story-understanding capacity that they can answer questions about the story even though the information that they give was never explicitly stated in the story."

    This definition of "understanding" reminds me of Turing's notion of "intelligence". They both reduce these notions to a very concrete, behavior-like definition, in the sense that they judge the ability to understand and to be intelligent as ways of PRODUCING intelligible and situationally correct behaviors/answers.
    Yet, it seems to me that understanding is always accompanied by some kind of 'feeling of understanding'. It is one thing to understand something theoretically, to know all the attributes of that thing, etc., as in the example of story telling. Yet, it is something else entirely to feel like you understand: when you have a sort of eureka moment, when all kinds of different links and associations are made, and you feel as though you experience the understanding. I believe that there is something it is like for me to understand.

    The essay made me think of the knowledge argument, or "Mary's room", in which Mary, a scientist, is living in a black and white room. She knows and "understands" all there is to theoretically know about colors, but she has never had the experience of color. When she steps outside the room and experiences color for the first time, she learns something new, and I would argue that she also gains a new level of understanding. Would Searle agree with this? If so, does this make him a dualist?

    ReplyDelete
  22. "To do this, they have a "representation" of the sort of information that human beings have about restaurants, which enables them to answer such questions as those above, given these sorts of stories. When the machine is given the story and then asked the question, the machine will print out answers of the sort that we would expect human beings to give if told similar stories."
    The point that Searle makes here is very important. Since AI is meant to be conscious technology, it should be able to extract and interpret information from the information that has been provided to it. In the example of the restaurant, the machine would be able to interpret whether the burger was eaten based on the behaviors of the person (which is seeding). However, in my opinion, truly intelligent technology should be able to observe and make its own decisions based on reasoning. (Although it is quite impressive that technology is already able to interpret whether someone had eaten the burger based on whether they were upset or happy with the results.)

    ReplyDelete
  23. “Anyone who thinks strong AI has a chance as a theory of the mind ought to ponder the implications of that remark. We are asked to accept it as a discovery of strong AI that the hunk of metal on the wall that we use to regulate the temperature has beliefs in exactly the same sense that we, our spouses, and our children have beliefs, and furthermore that "most" of the other machines in the room -- telephone, tape recorder, adding machine, electric light switch, -- also have beliefs in this literal sense.”

    This statement against McCarthy's claim is convincing, and it clears up a lot of the confusion I had before about strong AI and computationalism. I agree with Searle's point that human cognition works differently from the "hunk of metal on the wall," and that humans have beliefs while telephones, thermostats, and various other machines don't. However, we wouldn't expect these machines he mentioned to pass the Turing Test, since their programs, designs, and capacities are fairly simple compared with digital computers or robots. Thus, the argument is right, but it only shows that the lowest level of computational process is not eligible for cognition, while other forms of computation are not considered.

    I also wanted to comment on Searle's reply to the last objection; he states that "the interest of the original claim made on behalf of artificial intelligence is that it was a precise, well defined thesis: mental processes are computational processes over formally defined elements. I have been concerned to challenge that thesis. If the claim is redefined so that it is no longer that thesis, my objections no longer apply because there is no longer a testable hypothesis for them to apply to." I appreciate this reply because, when I was reading through the article, I found that Searle made a lot of assumptions, which makes his arguments somewhat questionable and invalid. Nevertheless, he is simply proposing the Chinese Room Experiment to oppose the idea of computationalism, and his assumptions are based on previous assumptions made by computationalists. Hence, many critics emphasize other aspects of mind and computation rather than engaging accurately with his arguments.

    ReplyDelete
  24. "On the basis of these two assumptions we assume that even if Schank's program isn't the whole story about understanding, it may be part of the story." I believe that the part of the story Searle is alluding to is the actual computation, the application of understanding. In the Chinese room, the computer, or rather Searle, is simply following the guidelines set before him in a language he comprehends. Searle is right in saying that this computation is far from understanding, and that we do not bestow our intentionality on a computer by offering it a rulebook and the appropriate hardware to complete the task at hand. As was pointed out last class, to achieve a level of semantic understanding comparable to that of a human, the computer would have to pass at the T3 level of the Turing hierarchy. Building a computer with robotic sensory transducers and motor components would allow the interaction with the world needed to form an understanding of the semantics behind the language we use. This is a necessary step in understanding language, but even then, it may only be part of the story.

    ReplyDelete
  25. My favorite part of this paper was Searle's discussion of dualism and how it relates to Strong AI. I personally overlooked the implication that Strong AI requires a belief that mind and brain are separate. Based on the current status of neuroscience we have little evidence to back up dualism. The evidence all points to the fact that the mind is a byproduct of brain activity. Thus, dualism is not a viable explanation for the mind/brain connection. Therefore, trying to separate the mind from brain through creating a computer program seems like it's a dead end.

    This ties in to Searle's motif of programs vs. machines. I think that his belief that we could make a machine that could think, but not a program that could think, has to do with rejecting dualism and working from the thought that there can be no mind without a brain.

    ReplyDelete
  26. According to strong AI, a programmed computer can be considered equivalent to a mind as “instantiating a formal program with the right input and output is a sufficient condition of, indeed is constitutive of, intentionality” (Searle). Searle tries to refute this claim, using the Chinese Room Experiment to display that instantiation of such a program is NOT sufficient for intentionality; rather a program must possess causal powers equal to those of human brains.

    I agree with Searle that neither a computer program nor the Chinese room is truly understanding and cognizing, and thus both lack intentionality; however, a part of me remains caught up on the other-minds problem. We only know that other humans possess intentionality/cognition from their behavioural outputs, so if a computer program can also pass such behavioural tests, shouldn't we attribute intentionality/cognition to it in the same way? How do we know any other human possesses intentionality aside from their behaviour? Therefore, in the same way, how can we attribute intentionality to humans by means of behavioural output and not to computers?

    ReplyDelete
  27. I thought that Searle's Chinese Room argument was interesting because it basically debunked the theory of computationalism that had been so prominent because of Turing. I furthermore thought that his distinction between strong AI and weak AI illuminated some good points about computers and whether they are capable of qualia or whether they are just zombies. The fact that it actually feels like something to understand another language is what, to me, showed that even a strong AI computer is incapable of thinking. I also think that it is interesting that some people like to argue that the person in the Chinese Room would eventually "learn" the Chinese language but fail to realize that the person in the Chinese Room has never actually been given the translational equivalents of any of the symbols. How could the person working in the Chinese Room even begin to actually understand Chinese without such a translational dictionary?

    ReplyDelete
  28. "But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?" I really like how Searle specified that this was the right question to ask. In agreement with his answer being no, I think it's important to realize that understanding is different from just inserting an input and receiving an output. Like he said, it is meaningless because it's just syntax, and no meaning is being extracted from it. Even though we receive the correct output, it's still important to try to understand how that output was reached. It would be similar to answering a math question without showing how that answer was derived. This could also matter if (I'm not sure whether this would happen) the program were to produce the wrong output, in which case actually understanding the process could help resolve the problem.

    ReplyDelete
  29. Part of why Searle does not understand Chinese is because the Chinese symbols are not grounded. Consider that since Chinese is such an old symbolic language, it has many characters that could feasibly be grounded by humans by transferring grounding from one language to another. For example, the Chinese character for 'horse' is one of the more literal symbols in the Chinese language (along with 'sun' and 'moon'). What I mean is that the Chinese horse character looks horse-like, and a human with knowledge of horses and of how languages develop might deduce that said character carries that meaning. If Searle could ground some Chinese characters from his understanding of English and knowledge of the world (which would allow him to abstract the pictorial meaning in the symbols), is he transferring his symbol grounding or creating a new one? And if he is able to transfer said grounding, then in theory, if we could transfer symbol grounding to a binary computer, could it pass T2 without needing to be T3?

    ReplyDelete

  30. While the individual in the systems reply does not understand Chinese, the system as a whole does understand Chinese. Searle proposes in response that letting an individual learn and, additionally, internalize the system will still not be sufficient to affirm his understanding of Chinese. This is congruent with the idea that symbol manipulation (computation, in a basic sense) alone is not enough to learn and comprehend Chinese; however, proponents of Strong AI would say that English is just a more formal method of symbol manipulation. With this in mind, how does formal symbol manipulation underlie the understanding of English? From a mechanistic perspective, how is symbol manipulation an effective mechanism for an English speaker but ineffective when this same speaker is introduced to Chinese?

    ReplyDelete
    Replies
    1. My understanding is that the symbols in the Chinese case are meaningless because Searle does not speak or understand Chinese, while the English symbols are meaningful to him because he does understand English. Thus, it is not purely symbol manipulation for him to give outputs in English; there are cognitive processes involved for an English speaker to produce answers in English.

      Delete
  31. Even if we learned all the rules of the Chinese Room, we still would not understand Chinese; rather, we would merely be able to produce the correct output. This is exactly what happens in the Turing Test: the correct outputs are produced without the cognitive intentions of a mind. We know that we understand something because it feels like something to understand, and this is why it would be wrong to conclude that even a strong AI is capable of thinking.

    ReplyDelete
  32. Searle discusses how, within one person, the English subsystem knows that "hamburgers" refers to hamburgers, while the Chinese subsystem only knows that squiggle is followed by squoggle, driving home the point that, just as the person does not understand Chinese even though he can manipulate it by rules, computers cannot "understand" even though they can manipulate symbols through programs. But how exactly do we know that "hamburgers" actually represents hamburgers? It seems to be through imagery and memory, such as picturing a juicy hamburger or remembering one you ate a week ago. So could one step toward bringing a computer closer to "understanding" be giving it these "memories" and image associations for a word? For example, we could write a program that assigns "hamburger" a meaning encoded in hundreds of first-person videos of hamburgers and experiences with hamburgers (it sounds ridiculous); would this bring computers any closer to "understanding" rather than just manipulating squiggles and squoggles? My real question is how human "understanding" fundamentally differs from computer non-understanding, and whether this difference can be converted from an abstract concept into an attainable program of some sort.
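
    As a rough sketch of what the proposal above might look like in code (entirely hypothetical; the class, its fields, and the file names are invented for illustration), the word could be stored alongside records of "experiences" of its referent:

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class AssociatedSymbol:
            """Hypothetical record pairing a word with stored traces of its referent."""
            name: str
            video_files: List[str] = field(default_factory=list)  # e.g. first-person clips
            episodes: List[str] = field(default_factory=list)     # e.g. remembered encounters

        hamburger = AssociatedSymbol(
            name="hamburger",
            video_files=["clip_001.mp4", "clip_002.mp4"],
            episodes=["ate one at a diner last week"],
        )

    Searle's likely reply is that, to the program, these are still just more uninterpreted tokens (file names and strings), so whether piling them up amounts to understanding is precisely what remains in dispute.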

    ReplyDelete
  33. “It is not because I am the instantiation of a computer program that I am able to understand English and have other forms of intentionality (I am, I suppose, the instantiation of any number of computer programs), but as far as we know it is because I am a certain sort of organism with a certain biological (i.e. chemical and physical) structure, and this structure, under certain conditions, is causally capable of producing perception, action, understanding, learning, and other intentional phenomena. And part of the point of the present argument is that only something that had those causal powers could have that intentionality.”

    In this passage, Searle treats perception, action, understanding and learning as products of the brain's "causal powers", which for him are necessary for intentionality. I wonder what Searle would say about intentionality in the context of machine learning: now that we can program machines that learn, have we successfully created machines with intentionality? These machines can extract similarities and differences between sets of data and draw conclusions, so in a sense they have "causal powers". On the other hand, they perform these tasks because they have been programmed by humans, organisms who do have intentionality. I am therefore unsure whether Searle would retain his claim that learning shows intentionality or would revise his definition of intentionality.
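
    To put "machines that can learn" in concrete terms, here is a toy learner (my own illustration, not any particular system Searle or this comment refers to): it summarizes labelled examples by their averages and classifies a new case by closeness to those averages. It does extract similarities and differences and "draws a conclusion", yet mechanically it is only arithmetic over numbers it never interprets, which is why it leaves the question about intentionality untouched.

        # Toy nearest-centroid classifier (illustrative only).
        def centroid(points):
            n = len(points)
            return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

        def classify(point, groups):
            """Return the label whose group centroid is closest to `point`."""
            def dist2(a, b):
                return sum((x - y) ** 2 for x, y in zip(a, b))
            return min(groups, key=lambda label: dist2(point, centroid(groups[label])))

        groups = {"A": [[0.0, 0.0], [0.2, 0.1]], "B": [[1.0, 1.0], [0.9, 1.1]]}
        print(classify([0.8, 0.9], groups))  # prints B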

    ReplyDelete
  34. “The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neurons firing at the synapses, it won’t have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states.”
    I don't fully understand what Searle means by the causal properties of the brain. From my understanding, he has an entirely physicalist view of the mind/brain. He suggests that intentionality and understanding, which are vital to his definition of cognition, are fully dependent on their biochemical origins (i.e., not implementation-independent). He is critical of the behaviourist (and dualist) approach to strong AI (computationalism), exemplified by the Turing Test, and believes that mental states depend on the physical/chemical properties of the brain. He does concede that intentionality might be multiply realizable, but holds that we cannot achieve an explanation of it through computation alone.
    How could causal properties be explained in a kid-sib-friendly way? As I understand it, causal powers means the power to cause something: in the brain, lower-level processes at the molecular level cause higher-level changes that result in consciousness/feelings. But what is the difference between a causal system and a formal system? Aren't they both kind of "if this then that" based, and capable of containing levels of abstraction? Although formal systems are based solely on syntax (shape), I don't think it can be said that causal systems are based on semantics any more than formal ones are.
    In the above quote, Searle refers to firing neurons as formal structures of the brain. What else is there in the brain other than neurons, synapses, neurotransmitters, and so on? I believe we're using the word formal here to mean "manipulated by form/shape", so which of these properties can be said NOT to be formal? (Charge, perhaps?) Reactions in the brain don't occur formally; they occur by nature of charge gradients, molecular interactions, and the like. But if they can be simulated (explained by the rules of chemistry), they can be formally simulated, right? Is he suggesting there is some non-physical attribute unique to the brain? Doesn't that contradict his view that the mind is not independent of the brain, not some non-physical entity? Or is he suggesting that a simulation in which brain chemistry is translated into a formal "representation" loses some of its essence, like having a labelled drawing of a neuron instead of a real one, so that a formal neuron CANNOT encapsulate everything there is about that neuron? (Much like Mary's room: you can know all the quantifiable, explicable details of the colour red, but upon seeing the colour for the first time you would still learn something new.) From Searle's argument, I don't understand what the causal properties of the brain are, or how those properties allow it to produce intentional states.
    I was also confused about Searle's reply to the robot argument. From class, I understood that sensorimotor processes are more than just formal symbol manipulations, but Searle seems to imply they are more of the same. Suppose we build a robot that is behaviourally indistinguishable from humans (it passes T3), and we also create a machine that has the same causal powers as a brain; Searle would say the former cannot have understanding while the latter can. The difference between the two could be described as the Hard Problem, but how do we assign agency (understanding) to one and not the other (especially if they are indistinguishable) without resorting to the explanation that one achieves its behaviour through causal properties and the other through formal ones?

    ReplyDelete
    Replies
    1. A few more thoughts:
      “We are asked to accept it as a discovery of strong AI that the hunk of metal on the wall that we use to regulate the temperature has beliefs in exactly the same sense that we, our spouses and our children have beliefs,”
      The McCarthy quote he's responding to is absurd, so I understand his hyperbolic response here, but I disagree with the argument he makes. He says that believing a machine could think would be akin to believing that the thermostat on your wall has beliefs. I would revise that statement: you would have to believe that the metal on your wall has the CAPACITY, if arranged in a certain way, to hold beliefs. In the same way, you could see in the genetic makeup of a liver the capacity for its host to have also developed a brain. Using the "unthinkingness" of the thermostat to exemplify the impossibility of strong AI holding beliefs is akin to using the liver as an example to disprove the ability of a brain to think. We're not comparing brains to thermostats here; we're comparing brains to strong AI…
      On confusing simulation with duplication: assisted by shows like Black Mirror, in which simulations are anthropomorphized for the visual medium of TV, I too have been guilty of assigning intentionality to simulations. Searle is baffled that people would believe such things when they can see, for example, that a simulated vacuum cleaner cannot vacuum any real dust; so why should simulated thought have any correspondence to real thought? The problem, I believe, lies in the abstract idea we have of what it means to think and feel, which probably has deep roots in Cartesian dualism. When someone says simulated feelings aren't real feelings, I have no tangible understanding of what feelings are, so since both seem to just be "out there" somewhere, both seem plausible. This might be an abridged version of why rational people, who understand the limitations of a simulated vacuum cleaner, wouldn't hesitate to accept the validity of a simulated thought.

      Delete
  35. For me, Searle's argument completely destroys the theory of computationalism. He brings up the concept of "intentionality" multiple times, which he claims is what sets human brains apart from other machines. This intentionality is the causal mechanism by which humans interact with their environment and cognize, and no formal model based on meaningless symbol manipulation will ever be sufficient for intentionality. Cognition is not simply computation: there is much more than inputs, outputs, and symbol manipulation going on in the human brain that allows us to do everything we can do and, more pertinently to this argument, to understand and learn, as the Chinese Room example demonstrates. The aspect of his argument that stood out to me was when he compared the Chinese inputs and outputs to those of the stomach, claiming that neither system has information. If information is the reduction of uncertainty, he is correct, because the formal symbol manipulations aren't reducing uncertainty; they are simply giving an output according to arbitrary rules. To the person interacting with the system the output may have meaning, but the system itself understands nothing and thus cannot have its uncertainty further reduced. I also enjoyed how he brought up common sayings that are simply wrong, such as "this calculator knows how to…": the calculator does not understand, nor does it "know" anything; it is simply running a program and doing computation. This demonstrates the power of language and how such sayings have contributed to the arguments of proponents of strong AI.

    ReplyDelete
  36. "Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output."
    Here Searle's conclusion that a purely computational device passing T2 would not be cognizing (would not understand Chinese) is a strong argument for why computation falls short of cognition, since it refutes the premise that cognition is just computation and, with it, the whole notion.
    “The aim of the Chinese room example was to try to show this by showing that as soon as we put something into the system that really does have intentionality (a man), and we program him with the formal program, you can see that the formal program carries no additional intentionality. It adds nothing, for example, to a man's ability to understand Chinese.”
    It is also interesting how he managed to explain it in such a relatable way, using human language output, in that we too can manipulate symbols meaninglessly, with no cognition taking place. Even if there is a subsystem for Chinese, it remains meaningless symbol manipulation according to rules stated in English, because English is what the human understands. Although the human has the capacity for intentionality, none of it is engaged by memorizing the program, because doing so does not teach the human to actually understand Chinese. Likewise, a computer can run the program, but that does not amount to having a mental state, as Searle says: "the brain's causal capacity to produce intentionality cannot consist in its instantiating a computer program". But then he says the "brain is irrelevant", which has me incredibly confused, because where in the world do mental states go then?
    His individual responses to the several objections also serve to strengthen his argument. One objection that crops up with any question in cognitive science, the other-minds problem, he refutes very easily by saying that it is not what we are concerned with: Searle is primarily concerned with disproving computationalism and with what it is we are attributing when we attribute a mental state, rather than with how we know we have one.
    However, I am not convinced by his claim that intentionality is "causally dependent on the specific biochemistry of its origins" in the way that lactation, photosynthesis, or any other biological phenomena are, because I don't think it is as simple as physiological and chemical mechanisms. He doesn't really elaborate on such a sweeping claim, but I can see why people are more likely to think that a computer simulating mental activity could have intentionality than to believe that a computer simulating metabolism can actually produce sugar. Perhaps because we ourselves are unsure of how intentionality occurs in us, we jump more easily to such associations in seeking understanding; whether they are correct is a different matter.

    ReplyDelete
  37. One of the points I find interesting is that Searle believes he is showing that cognition is not computation at all. He doesn't succeed in showing that; what he does show is that cognition isn't just computation. How much of it is computation? That is where the symbol-grounding problem comes in. If computation were the whole story, you would only need to pass T2, but since cognition isn't just computation, you need to pass T3. The premises of the Chinese Room argument are those of strong AI: cognition is computation, computation is implementation-independent, and T2 is decisive. The squiggle and squoggle rules operate on the symbols' shapes, not their meanings. As for the higher levels of the hierarchy: there are many different ways a robot could pass T3 as long as it is hybrid and dynamic; to pass T4 a robot would also need a synthetic brain; and to pass T5 it would need an actual brain, exactly the same as a human's.

    Another interesting point to discuss is the case where a computer program passes T2 in Chinese and is able to correspond in Chinese with pen pals (its Turing Test is in Chinese). If Searle becomes the hardware by taking the program and executing its squiggle and squoggle manipulation rules himself, then under those circumstances he isn't really understanding Chinese, and so the computer wasn't either. This shows us that cognition is not just computation. Searle used feeling to show that it couldn't be JUST computation, whereas Turing said not to worry about "feeling." Ultimately, Searle made a mistake in introducing it as a "Chinese room" with symbols on the walls, because this encourages people to believe that he is just a part of the system rather than the entire thing.

    ReplyDelete
