Saturday, January 6, 2018

3b. Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument?

Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press.



Searle's Chinese Room Argument showed a fatal flaw in computationalism (the idea that mental states are just computational states) and helped usher in the era of situated robotics and symbol grounding (although Searle himself thought neuroscience was the only correct way to understand the mind).

50 comments:

  1. “The force of his CRA depends completely on understanding’s being a conscious mental state, one whose presence or absence one can consciously and hence truthfully ascertain and attest to”

    This paper helped illuminate some of the complexities that I found myself struggling with when reading Searle's papers. Although many of his rebuttals were quite clear, his verbiage was sometimes confounding. Reformulating the tenets against which Searle was arguing as the tenets of computationalism made the main ideas of his paper much clearer. Nonetheless, it also brought up some things that I do not fully understand.

    Before finishing this paper, I hadn't thought about the idea of conscious versus unconscious understanding. Although it is clear to me that we can never know another entity's mental state with Cartesian certainty, I am still not sure I understand Searle's periscope. The paper states that a system can only escape Searle's periscope either by no longer claiming that mental states arise purely in virtue of being in the right computational state, or by giving up and saying that computation is not mental. How does this fit into understanding? What does it teach us about the hard problem?

    ReplyDelete
  2. There's one thing puzzling me that I'd like to ask about.

    Stevan says that a computer can't pass the linguistic Turing Test T2 without also passing T3. Yet Stevan accepts Searle's supposition that there is such a thing as a Chinese algorithm that we could blindly follow to perfectly answer Chinese questions without really "understanding" our answers.

    Isn't this the very definition of passing T2? If we suppose that such an algorithm exists and that it fits on a computer (or a book for the human to read from), then isn't accepting this claim the very same thing as accepting that a computer can pass T2 without passing T3?

    ReplyDelete
    Replies
    1. I am also confused about the same thing. There already exist computers that can pass T2 but not T3. For example, have you ever heard of CleverBot? Google it if you haven't. It's an online chat-bot that you can send messages to, and it will respond and have a conversation with you. Its responses are not pre-programmed; it learns through the thousands and thousands of text conversations it has with people. It is a strong candidate for passing T2, but it certainly doesn't pass T3, as it doesn't have a body.
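      (To make the comparison concrete, here is a purely hypothetical, minimal sketch of the kind of retrieval-style mechanism being described: store past exchanges and answer a new prompt with the stored reply whose prompt shares the most words with it. This is not CleverBot's actual code, just an illustration.)

```python
# Purely hypothetical sketch of a retrieval-style chatbot (NOT CleverBot's
# actual code): it stores past (prompt, reply) pairs and answers a new prompt
# with the stored reply whose prompt shares the most words with it.

class RetrievalBot:
    def __init__(self):
        self.memory = []  # (prompt, reply) pairs "learned" from past chats

    def learn(self, prompt, reply):
        self.memory.append((prompt.lower(), reply))

    def respond(self, prompt):
        if not self.memory:
            return "Tell me more."
        words = set(prompt.lower().split())
        # pick the stored pair whose prompt has the largest word overlap
        _, best_reply = max(
            self.memory, key=lambda pair: len(words & set(pair[0].split()))
        )
        return best_reply

bot = RetrievalBot()
bot.learn("how are you today", "I'm fine, thanks!")
bot.learn("what is your name", "People call me Bot.")
print(bot.respond("how are you"))  # -> "I'm fine, thanks!"
```

      A mechanism this shallow is grounded in nothing beyond stored text, which is arguably why the next reply calls such a bot a toy.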

      Delete
    2. I don't think Cleverbot is a great example. Cleverbot is more T1 – what Stevan calls a toy. The "bot" is more of a novelty and cannot hold a conversation so much as give shallow answers and deflect. Recall that Stevan likened a T2-passing machine to a pen pal you could have corresponded with for years and years who was indistinguishable from a real person. So, while Cleverbot is a funny toy, I wouldn't say it is a strong candidate for T2.
      That being said, I am also very interested in how it makes sense to accept that this Chinese-language algorithm could exist and be T2-passing while not at all T3-passing.

      Delete
  3. I think comparing the Turing Test to D3 is not accurate. The D3 duck is indistinguishable from the normal duck in function, but not in structure. A machine that passes the Turing Test, however, is not functionally indistinguishable from a human; it differs in countless ways: for example, it cannot have feelings, make conversation, or move around. If this direct comparison is made, I think it is an inaccurate one, unless it is explicitly stated that possessing some but not all of the functions is enough.

    ReplyDelete
    Replies
    1. I think T3, at least by the way we defined it in lecture, does include behavioral functioning. So while we wouldn't be able to establish "feelings" per se, a machine (or duck, I guess) that passed T3 would absolutely be able to do performance-based actions like conversation (it can already do this in T2, no?) and movements! The difference (and where your claim of inaccuracy is definitely on point) is that Turing defined his test as T2, whereas a duck or any kind of machine that would be "indistinguishable" (the real Turing-passer) would have to be on the level of T3. It really depends on what you take to be the level at which a machine passes the Turing test (T2 vs T3, and some argue even T4).

      Delete
    2. Even if you were to argue that a machine doesn't pass the Turing Test unless it is at the level of T4, we still would not be able to establish feelings. This goes back to the 'other minds' problem: although the machine may reach T4 indistinguishability, we still have no formal way to prove that the machine has thoughts or feelings.

      Delete
    3. Oh of course! But that's getting into the hard problem, which can't be solved even by going all the way to T5, the level that we as humans are at.

      Delete
  4. "Now just as it is no refutation (but rather an affirmation) of the CRA to deny that T2 is a strong enough test, or to deny that a computer could ever pass it, it is merely special pleading to try to save computationalism by stipulating ad hoc (in the face of the CRA) that implementational details DO matter after all, and that the computer's is the "right" kind of implementation, whereas Searle's is the "wrong" kind. This just amounts to conceding that tenet (2) is false after all."
    Is that actually a problem for computationalism? If we stipulate that the implementation does matter, then those who say that "the brain is irrelevant" would be excluded, but was that a viable position (tenet 2) for computationalists to begin with? At the very least I would imagine that it (mental states/the whole of cognition) would need to be implemented on something with the property of universality – so computers and gears-and-levers are candidates but chairs and steam engines are not.

    ReplyDelete
    Replies
    1. The problem I see with this line of reasoning is that once you concede that implementation is important, the human brain (which we know implements the correct software, because we can pass T2-T5) becomes a locus of correct software implementation. If the brain were to be thought of as a machine running computational software capable of simulating all cognitive function, it would have to simulate feeling (which we know computation cannot do, because we cannot define feelings via computation). Therefore, making computationalism implementation-dependent would invalidate the theory.

      Delete
  5. "This does not imply that passing the Turing Test (TT) is a guarantor of having a mind or that failing it is a guarantor of lacking one. It just means that we cannot do any BETTER than the TT, empirically speaking."

    I think this is such an important point! The Turing Test is sufficient for describing cognition (or more accurately, for setting markers for what qualifies as cognition), but I don't believe that it's necessary for describing cognition; it deals with an idealized human figure (being an idealized machine), which does not necessarily hold up in reality (the example in class of "would blind people pass the TT").

    ReplyDelete
    Replies
    1. I would also like to comment on this passage of the reading. I have to disagree that the TT is sufficient for describing cognition. Cognition includes an ability to learn and acquire knowledge through experience. Saying that passing the TT is sufficient for describing cognition is underdetermination. Saying that we don't ask more of humans is also an underdetermination. We do ask more of humans: we ask them to interpret, to react, to be able to change their minds, etc. Could a digital computer change its "mind" in an argument it is having with you over email? How could this be possible, since it is following a defined set of rules? In human interactions, although the inputs of a discussion might not change, it is still possible to privately think and change your mind. This is a very natural thing that we expect from human cognition that is not represented in the TT. Thus, the TT is not sufficient to explain cognition, in my opinion.

      Delete
    2. But couldn't you say that a machine that is able to pass the TT, indefinitely too, has at least early levels of cognition and thus could be considered to have cognition? Based on your definition, Emmanuelle, if we used patient H.M. as an example -- he could not remember anything new after his surgery and thus could not learn anything new from his experiences. Yet no one assumed that he had lost his cognition.

      I agree that if you consider cognition to be more than simple understanding and having an email conversation, then the TT is not enough. However, I think it is very possible that a computer could change its mind if a set of rules has been programmed that states "if person says x, don't agree and state y." I don't think most people are underestimating humans' prowess, but we shouldn't assume that a machine isn't cognizant just because it has trouble changing its mind.
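      (For concreteness, here is a purely hypothetical sketch, not any real program, of the kind of rule just described, where "changing its mind" is simply a programmed change of stored state.)

```python
# Purely hypothetical sketch (not any real chatbot): a "change of mind"
# implemented as a programmed change of stored state. If the interlocutor
# asserts x, the program switches its stance to y, as the rule prescribes.

stance = "Cats make the best pets."

# rule table: if the other person says (roughly) this, adopt that stance instead
rules = {
    "dogs are more loyal": "You're right, dogs make the best pets.",
    "fish are easier to care for": "Fair point, fish make the best pets.",
}

def reply(message: str) -> str:
    global stance
    for trigger, new_stance in rules.items():
        if trigger in message.lower():
            stance = new_stance  # the stored "opinion" changes
            return "Hmm, I've changed my mind. " + stance
    return "I still think: " + stance

print(reply("I'm not sure what the best pet is."))  # keeps its stance
print(reply("But dogs are more loyal!"))            # switches stance by rule
```

      Whether such a programmed state change counts as really changing one's mind is, of course, the open question.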

      Delete
    3. I agree with Emmanuelle: claiming that passing the TT is sufficient for describing cognition understates what cognition involves. There is so much more variability in human thought and interaction, as well as different personal characteristics at play. However, I also agree with Isaure in that there seem to be different levels of cognition. I think the issue is that passing the TT doesn't properly encompass the complexity of human cognition, and I think that's what Emmanuelle was getting at. While there may be a simulation of more elementary cognition at play, the TT doesn't provide an accurate model for the level of complexity humans are at.

      Delete
    4. Isaure, I would argue that patient H.M. has "lost a part" of his cognition. He can still do and feel some things, but the brain function that allowed him to generate new memories (do) and consciously be aware of these memories (feel) is impaired. Moreover, I am sure that if I were to have an email conversation with patient H.M., I would probably not think it's a human I'm talking to on the other end. This is ironic, because it would be a real human, but the fact that I could not hold a normal conversation with him (one where you have to recall certain information) would make me think otherwise. Thus, is this really a good test?

      Delete
  6. “A more interesting gambit was to concede that no conscious understanding would be going on under these conditions, but that UNconscious understanding would be, in virtue of the computations.”

    I see how it would be possible for the manipulations of the Chinese characters to be unconscious understanding, but can it be called understanding at all? When you find a phone number by having your fingers find the digits without consciously recalling them, can it really be called knowledge? In the CRA, what would unconsciously understanding the manipulations mean? Understanding seems to be a wholly conscious activity, and it seems impossible to understand something unconsciously.

    ReplyDelete
    Replies
    1. Actually, I believe that Harnad and Searle would agree with you here. Surely, the process of gaining our understanding may occur to some degree during an unconscious mental state. But to articulate our understanding we must be conscious. In fact, Harnad argues reasonably that this notion of unconscious understanding would only make sense in an entity that has the capacity to be conscious. A conscious entity, again, would still not learn Chinese in the Chinese room. So, "Searle also needs no defense against this revised notion of understanding" (unconsciousness as a mental state).

      Delete
  7. "For although we can never become any other physical entity than ourselves, if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it. This is Searle's Periscope, and a system can only protect itself from it by either not purporting to be in a mental state purely in virtue of being in a computational state -- or by giving up on the mental nature of the computational state, conceding that it is just another unconscious (or rather NONconscious) state -- nothing to do with the mind." Did not quite understand Searle's Periscope concept, any clarification?

    "Can we ever experience another entity's mental states directly?" If I could play devil's advocate, I would pose the question, if it is true that software is hardware independent (aka mental states are implementation-independent), than would it not be possible to transfer the mental state of a human being into a "hardware" that can process the mental state much like the brain (ignoring the complexity of designing such a technology, but just from a philosophy pov) and thus capture a humans entire mental state in a computer and thus experience his/her thoughts and perspectives in this manner?

    ReplyDelete
    Replies
    1. I’m also having trouble understanding how we could get into the same computational state as the entity in question without being the entity. You make a good point about the non-feasibility of experiencing another entity’s mental states directly, and I wholeheartedly agree – I don’t think it’s possible. But perhaps this is more of a hypothetical point being made. When we are able to change our computational state (even though it can’t be done today, although perhaps someday it may be possible), we can apply this reverse-engineering methodology to look at the mental states attributed to the computational states. When this can be achieved, it would answer or at least give some insight on the Other Minds problem. In short, we can’t yet enter other computational states, so we do not yet have a solution to the Other Minds problem.

      Delete
  8. Descartes states that one of the only things we can be sure about is the fact that we have mental states and feelings of our own. The other-minds problem limits us from going beyond ourselves and into the feelings and states of others. Computationalists believe that some mental states occur just "in virtue of being in the right computational state," and if there were some way for us to check whether mental states result from our being in a certain computational state, then we could conclude that computationalism is truly what is going on here. I'm a little confused as to what Searle's periscope would be, but it sounds to me as if the whole concept is synonymous with the idea of homuncularity, and if that is the case then it is certainly right for systems to protect themselves from utilizing it to explain mental states.

    Both computationalism and Searle's Chinese Room Argument are similar in the sense that they are extreme positions along the spectrum of how we come to explain what cognition is, and I do agree that cognition is probably some sort of combination of the two proposals. What I'm wondering about is what kind of hybrid explanation can be used to explain cognition. Would it make sense to postulate two separate systems, one computational and one not, working simultaneously and in parallel with each other, or perhaps a joint system where one feeds into the other?

    ReplyDelete
  9. This comment has been removed by the author.

    ReplyDelete
  10. Is Strong AI right or wrong? After reading Harnad's response to Searle's CRA, I can better understand why Searle adopted such an extreme view: that computation has nothing to do with cognition and that studying neuroscience will give us the answer as to how we understand.

    Consider the reformulated propositions of Strong AI (=computationalism):
    1. Mental states are just implementations of the computer program
    2. Computational states are implementation-independent
    3. There is no stronger empirical test for the presence of mental states than Turing-Indistinguishability, so the Turing Test is the decisive test for a computationalist theory of mental states

    According to computationalism, if Searle is able to pass T2 in Chinese, then he should be able to understand it. However, he is still not able to understand Chinese, so computationalism must be wrong. Following this rationale, Searle’s argument is logical.

    ---

    Unconscious states in nonconscious entities (like toasters) are no kind of MENTAL state at all.

    The example of us not actually knowing a phone number because we cannot consciously recall it confuses me. If we are still able to dial the phone number via procedural memory, then, although it is unconscious, don’t we still “know” it? And if so, then isn't it a mental state?

    ReplyDelete
    Replies
    1. “It's definitely not what we mean by "understanding a language," which surely means CONSCIOUS understanding.” (Stevan)

      I am also struggling with this phone number example and the overall notion of conscious vs. unconscious understanding. Searle’s Periscope is that without consciousness there is no understanding as the CRA considers understanding to be a CONSCIOUS mental state.

      But what about sleep talking for example? Isn’t this a demonstration of language we are not conscious of? And don't we still require an understanding/knowledge of the language in order to sleep talk? Therefore, isn’t UNCONSCIOUS understanding of a language possible?

      Delete
  11. Not only does Searle reject computationalism, he also rejects the TT as being sufficiently decisive. In doing so, he believes neuroscience is the key to reverse-engineering cognition. In TT terms, this would be T4, that is, reverse-engineering the brain and building a synthetic brain based on its causal mechanisms. Still, though, it would be underdetermined, right? This is what Professor Harnad meant by: “There are still plenty of degrees of freedom in both hybrid and noncomputational approaches to reverse-engineering cognition without constraining us to reverse-engineering the brain (T4).”

    But in terms of structure and function, T4 may be over-determined. Our brains have causal functions that are seemingly unrelated to cognition and for all we know, may be unnecessary for cognition.

    Given that Searle jumped a little too far but still drew some insightful conclusions from his CRA, are we aiming for something in between T3 and T4 to reveal what we want to know about cognition?

    ReplyDelete
  12. “Note that there are many ways to reject this premise, but resorting to any of them is tantamount to accepting Searle's conclusion, which is that a T2-passing computer would NOT understand”
    It's not obvious to me that this less strong statement of Searle's conclusion would be incorrect. Rather, I think the answer lies somewhere in between: a T2-passing computer, while showing that one can simulate a human with indistinguishable input/output, doesn't tell us much about whether it understands. Understanding in this case means being able to kid-sib things down and explain the concept and idea behind what's being discussed in different language. So I don't think we can give a definitive "it does" or "it does not" understand answer just based on a positive performance on the TT. Instead there has to be a separate test in which understanding of the inputted and outputted discussion and information could be evaluated.

    ReplyDelete
  13. “But now Searle brings out his intuition pump, adding that we are to imagine this computer as passing T2 in Chinese; and we are asked to believe (because it is true) that Searle himself does not understand Chinese. It remains only to note that if Searle himself were executing the computer program, he would still not be understanding Chinese. Hence (by (2)) neither would the computer, executing the very same program. Q.E.D. Computationalism is false.”
    From what I understand, this passage is saying that, since Searle is not able to understand Chinese and software is hardware-independent, the computer (the entire system taking part in the Turing Test) cannot understand Chinese either. I am a little confused by this, since from what I understand the physical properties of the hardware do not have a significant impact on its software. From this reasoning, are we meant to infer that the software does have an impact on its hardware?

    ReplyDelete
    Replies
    1. I think the physical properties not having an impact on the software feeds into the idea that as long as the computer/software arrives at the right conclusion, how you got to that response is insignificant (implementation-independent). However, the point I believe is being made is that the computer, or Searle himself, does not understand Chinese and therefore does not understand what he is emailing (since we are discussing T2). Therefore, no matter how he is getting to the right answer, or whatever hardware is being used, it does not necessarily follow that the person or computer actually understands the right answers given. Coming to this conclusion, we can say that computationalism is false, or at least partly false, since we cannot say that ALL of cognition is computation.

      Delete
  14. “But now Searle brings out his intuition pump, adding that we are to imagine this computer as passing T2 in Chinese; and we are asked to believe (because it is true) that Searle himself does not understand Chinese. It remains only to note that if Searle himself were executing the computer program, he would still not be understanding Chinese. Hence (by (2)) neither would the computer, executing the very same program. Q.E.D. Computationalism is false.”
    The article helps me understand Searle's arguments better in terms of his refutations, but I am a bit confused about this passage. I understand that the point here is contrary to the second tenet of computationalism, i.e., that software is not implementation-independent and "implementation details do matter." My question is: why is Searle, the hardware, executing the computer program, and not the other way around?

    ReplyDelete
    Replies
    1. Hi Mavis, from my understanding this passage is not saying that "implementation details do matter," but rather proving why they don't. The second tenet of computationalism is that computational states are implementation-independent. Therefore, Searle's manipulation of Chinese (without understanding it) is a fair way to implement the idea of passing T2 in Chinese. However, here Searle is saying that although he can manipulate the characters in Chinese to pass T2, he is no closer to really understanding Chinese, because he doesn't understand the meanings behind the symbols. Hence, a computer executing the same program also wouldn't be understanding what was going on. So if someone argued that "implementation details do matter," meaning that Searle isn't the "right" kind of implementation for the program, they would inherently be contradicting themselves and, furthermore, be contradicting computationalism. I think here Searle is inherently the dynamic part of the system, because he is what the computation (computer or whatever) is occurring on, and not the computation himself.
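      (As a toy illustration of tenet (2), here is a sketch of my own, not from the paper: one and the same input/output rule set realized in two different ways. On the computationalist view these count as the same computational state, and Searle stepping through the rules by hand on paper would just be a third, equally valid implementation.)

```python
# Toy illustration (not from the paper) of implementation-independence:
# the same rule set realized two different ways. By tenet (2) both count as
# the same computational state, and working through the rules by hand on
# paper would just be a third implementation.

RULES = {"squiggle": "squoggle", "ping": "pong"}

def lookup_implementation(symbol):
    # implementation A: a dictionary lookup
    return RULES.get(symbol, "???")

def branching_implementation(symbol):
    # implementation B: chained if-statements; different low-level steps
    if symbol == "squiggle":
        return "squoggle"
    if symbol == "ping":
        return "pong"
    return "???"

for s in ["squiggle", "ping", "blah"]:
    assert lookup_implementation(s) == branching_implementation(s)
print("Both implementations compute the same input/output function.")
```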

      Delete
  15. "A more interesting gambit was to concede that no conscious understanding would be going on under these conditions, but that UNconscious understanding would be, in virtue of computations. This last is not an arbitrary speculation, but a revised notion of understanding. Searle really has no defense against it, because, as we shall see (although he does not explicitly admit it), the force of his CRA depends completely on understanding's being a CONSCIOUS mental state…"
    This was a turning point in the article, and in the ongoing debate about Searle's CRA, in my mind, as it explicitly stated what I could not put my finger on while reading Searle's original paper. The notion of conscious understanding is crucial to my interpretation of what Searle is arguing when he suggests that, while he may be able to perform the task at hand (whether he is considered to be PART of the system or the system as a WHOLE, depending on which variant of the Systems Reply we look at), he is not 'understanding'. Surely understanding in this sense means a conscious mental state. Once this becomes clear, what is meant by Searle's Periscope becomes easier to grasp.

    ReplyDelete
  16. I’m confused as to how Searle’s Periscope works. It’s described as a way to understand the notoriously difficult “other minds” problem. However, I don’t understand how it accomplishes this.

    What I've grasped so far is that the foundation of Searle's Periscope is the property of transitivity. “If all physical implementations of one and the same computational system are indeed equivalent, then when any one of them has (or lacks) a given computational property, it follows that they all do (and, by tenet (1) being a mental state is just a computational property).” But what is the implication of this? I understood it to be that, as a result of the implementation-independence of computational states, if an implementation is lacking the computational state of the original one we're modeling after, it's not a full/proper implementation. This seems rather straightforward to me.
    Further on it’s stated that: “For although we can never become any other physical entity than ourselves, if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it. This is Searle's Periscope, and a system can only protect itself from it by either not purporting to be in a mental state purely in virtue of being in a computational state -- or by giving up on the mental nature of the computational state, conceding that it is just another unconscious (or rather nonconscious) state”.

    This confused me even further. How are we supposed to get into the same computational state as the entity we're examining? What would the process for this be? For example, let's say the mental state of anger results from a specific computational state of the brain that we've found through research. How would we replicate this? Through a simulation? Through manipulating our own computational states? I can't seem to understand this part and I would greatly appreciate some clarification, kid-sib style.

    ReplyDelete
    Replies
    1. A periscope can be described as a device for viewing objects that are in an obstructed field of vision. Searle’s periscope is really just the Chinese Room Argument. It’s a way of exploring whether there is any understanding in a proposed computational state without you having to be the program itself.
      My understanding of the first quote you provided is that computationalism is underdetermined by its implementations. There is more than one way to construct a thing that passes T3 (think of Apple vs. Sony each creating hardware for a “strong AI” that passes T3). However, the “program” running on each of these machines would be equivalent (going by the idea that cognition = computation, which is implementation-independent), so if you were to say that the Apple AI lacks a certain computational property, they all lack it (and vice versa). For example, if it's shown that Sony's AI can be said to have mental states, then by their equivalency, Apple's must have them too.
      The only way to know if the AI feels is to BE the AI itself, so the barrier to knowing whether it ACTUALLY feels is the other-minds problem. In the analogy using the CRA as a periscope, the other-minds problem obscures our vision into the mental states of the machine, and the periscope lets us assess its computational states without actually having to physically BE it. By simplifying the process of computation into a story of a man maintaining a conversation in Chinese (a language he does not understand) purely by following some instructions for manipulating the Chinese symbols, Searle creates a periscope into a really abstract concept, showing that this computational state does not produce a mental state, because neither the man nor the “system” he creates truly understands Chinese.
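      (To make the "rule book" image concrete, here is a purely hypothetical toy sketch: a symbol-to-symbol lookup that produces "right" Chinese output without ever consulting any meanings. It is vastly simpler than anything that could pass T2, but it shows what "following the instructions from the inside" means.)

```python
# Hypothetical toy "rule book" (vastly simpler than anything that could pass
# T2): a pure symbol-to-symbol lookup with no meanings attached. Whoever or
# whatever executes it can produce the "right" output without understanding,
# which is exactly what executing the rules from the inside lets Searle check.

RULE_BOOK = {
    "你好吗": "我很好，谢谢",        # marks in -> marks out; no semantics consulted
    "你叫什么名字": "我叫小明",
}

def chinese_room(input_symbols: str) -> str:
    # follow the rule book mechanically; fall back to a stock string of marks
    return RULE_BOOK.get(input_symbols, "请再说一遍")

print(chinese_room("你好吗"))
```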

      Delete
  17. Searle may have tried to refute the Systems Reply to his Chinese Room Argument, but I think there are actually some merits to it that weren't fully discussed. If you look at a factory that makes cars or shampoo, no single person knows exactly how all the pieces come together to make the final product. Each person on the assembly line is only responsible for one discrete stage in the production of a specific item and shouldn't be asked to explain how the item as a whole is constructed. When you look at the bigger picture, however, you see that the factory as a whole knows how to make a bottle of shampoo with all the proper chemicals and the right label printed on it. Each person does in fact know and understand their own step in the process, though, so it wouldn't be a lack of understanding; it would just be a lack of understanding of the whole.

    ReplyDelete
  18. “For although we can never become any other physical entity than ourselves, if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it” (Harnad, 2001).
    This is what Harnad equates to Searle's Periscope, the method with which Searle exposes the “soft underbelly of computationalism.” Adding to this, if I were to believe in computationalism and the implementation-independence of computational states, would it not follow that our mental state or “software” could be run by a computer? In this way, could we not transfer our mental state to another physical being other than ourselves? This standpoint could be used to explain the difference between two monozygotic twins. They both have exactly the same genome and exactly the same anatomy and, for the sake of the argument, exactly the same environmental variables. Yet their consciousness and mental states are not the same. This exemplifies the duality between the “software” and the “hardware” at the heart of computationalism's second tenet.

    ReplyDelete
  19. "This is Searle's Periscope, and a system can only protect itself from it by either not purporting to be in a mental state purely in virtue of being in a computational state -- or by giving up on the mental nature of the computational state, conceding that it is just another unconscious (or rather NONconscious) state -- nothing to do with the mind." I struggle to understand what is being conveyed through this passage.. Does this mean that all computations are unconscious? What about human based computations, wouldn't they be done in a conscious state rather than an NONconcious state?

    ReplyDelete

  20. RE: "Consider reverse-engineering a duck: A reverse-engineered duck would have to be indistinguishable from a real duck both structurally and functionally: It would not only have to walk, swim and quack (etc.) exactly like a duck, but it would also have to look exactly like a duck, both externally and internally."

    It is undeniable that fully understanding the workings of a D4 duck would be equivalent to understanding the workings of a real duck. At the level of D3, the distinction from a real duck is structural, although their functions are indistinguishable. My question is: where exactly does D3 become D4? The microfunctional continuum between D3 and D4, described in the paper as the dependency of functional mechanisms on structural mechanisms (for example, webbed feet for swimming), makes it increasingly difficult to discern the switch from D3 to D4.

    ReplyDelete
  21. “[…] the force of his CRA depends completely on understanding’s being a conscious mental state, one whose presence or absence one can consciously (and hence truthfully) ascertain and attest to (Searle’s Periscope).”

    When I was reading Searle’s paper, it was unclear to me what he meant by “understanding”. Dividing understanding into conscious and unconscious understanding, as Harnad did in his paper, helped me better understand the CRA. Indeed, the person in the Chinese Room would not emerge with a conscious understanding of Chinese, but they would unconsciously understand some of the rules of Chinese by virtue of the computations they were performing. However, I am still unsure what Harnad means by the expression “Searle’s Periscope”. From my understanding, it is a variation on the “other minds” problem, and consists in getting into the same computational state as another entity in order to experience its mental states. Harnad states that this is possible, but I am still unclear on how this can be done.

    ReplyDelete
  22. “The reason we have that long-standing problem in understanding how on earth mental states could be just physical states is that they are NOT! Mental states are just computational states, and computational states are implementation-independent. They have to be physically implemented, to be sure, but don't look for the mentality in the matter (the hardware): it's the software (the computer program) that matters. “ (page 4)

    Does this suggest that if we believe in computationalism, and we decide to study the brain as a machine, then in order to gain insight into brain functioning we should not look at the physical brain regions activated, but should instead investigate the brain processes being carried out?

    ReplyDelete
  23. The computer program has to be physically implemented as a dynamical system in order to become the corresponding computational state, but the physical details of the implementation are irrelevant to the computational state that they implement -- except that there has to be SOME form of physical implementation. Radically different physical systems can all be implementing one and the same computational system. (Pages 3-4)

    Does this suggest that if computationalism is true, then different individuals' physical brains are the varying hardware, and the brain processes are the software of the system?

    ReplyDelete

  24. In this critique, it is claimed that only T2 is vulnerable to the CRA. In my opinion, T3 would be as well: yes, it is indistinguishable from humans in its sensorimotor capacities and behaviour, yet it is still running a program of symbol manipulation based on rules (computation) and therefore cannot "understand", as Searle would put it. In my opinion, to say that mental states are simply computational states does not explain the complexities of the human mind, and leaves many questions unanswered.
    Secondly, I cannot fully grasp the concept of Searle's Periscope. It is based on the assumption that there are mental states that occur due to being in the right computational state, and that if we can get into that computational state, we can check whether or not another machine has the mental state in question. But how can mental states, such as understanding, emerge from a computational state? From simple symbol manipulation, inputs and outputs based on a given set of rules? In all mental states, consciousness and awareness play important roles, and in computational states they are non-existent. It does not seem to me that one is transferable to the other.

    ReplyDelete
  25. “but surely an executing program might be part of what an understanding "system" like ourselves really does and is (so the "System Reply" was right too).”

    Comment: why did this position seem valid at the time? It seems like the whole point of Searle executing the same functions as the computer was to show that there is this lack of understanding in what a computer does when it manipulates symbols, so Searle being part of a whole system that “understands” seems like a very weak rebuttal. What does it even mean to understand but not be aware of that understanding? There would be simply no way of verifying if the “system” truly understands.

    ReplyDelete
    Replies
    1. "comments: why was this position seem valid at the time? it seems like the whole point of Searle executing the same functions as the computer was to show that there is this lack of understanding in what computer does when it manipulates symbol, so Searle being part of a whole system that “understands” seems like a very weak rebuttal."

      I think that Professor Harnad's statement that the Systems Reply was right is a bit misleading. I think all he's saying is that it's possible (and probably necessary) that there be a computational element in a cognitive system. Even though computations on their own don't produce understanding, computations + feeling might. But the implication is that whatever is doing the feeling can assign meaning to the output of the computations. This actually exceeds the scope of both the Chinese Room Argument and the Systems Reply (they're arguing about what can be achieved through computation alone, not with something additional).

      "What does it even mean to understand but not be aware of that understanding?"

      I'm not sure what it means to understand but not be aware of understanding. It could be what Professor Harnad refers to as an unconscious mental state in a conscious being (i.e., muscle memory for dialing a phone number without recalling the actual numbers). However, as Professor Harnad points out, even an unconscious mental state in a conscious being can only achieve so much. It's one thing to be able to speed-dial a number by hand; it's another to be able to pass the Turing Test.

      "There would be simply no way of verifying if the “system” truly understands."

      While it's true that it's impossible to prove whether the system truly understands or not, it seems extremely unlikely that Searle, performing a bunch of symbol manipulations on paper, doesn't understand what he's doing, BUT that he, the paper, the instructions and the room somehow come together to produce understanding. The very argument the Systems Reply makes is also its weakness.

      Delete
  26. I would like to examine more closely a statement made in this article: "It means that we cannot do any better than TT, empirically speaking. Whatever cognition actually turns out to be, cognitive science can only ever be a form of 'reverse engineering'." I do not disagree that cognitive science is the process by which we reverse-engineer the mind in hopes of discovering its structure and abilities. What I will question is the Turing Test as the gold standard of this reverse engineering. It appears that the Turing Test is the starting point for many scholars of cognitive science, not least of whom is Searle himself. I can't help but wonder: if the Turing Test had been presented a different way, for example imitating a human being through math problems, or through the ability to sketch a picture, would the philosophical arguments going on in the field be the same as they are now?
    I'll explain what I mean with this example. Of course, when Searle is in the Chinese Room, he is simply using a rule-book to help him navigate the Chinese characters, and therefore never truly learns the language and cannot be thought of as an "understanding system". But suppose instead he had based his argument on a TT that used math problems to play the imitation game. Here, I suggest that Searle, through the process of being in the room with the rule-book and the problems, would eventually have learned the "language" of math and started producing answers with understanding, thus refuting his own argument. In other words, if the Turing Test, or your D2, were not so language-centric, or if the language in question were one of numbers or another easy-to-learn universal system, would this conversation about what it is to "understand" be different?
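    (To make the math example concrete, here is a hypothetical sketch of "math by rote": column addition done purely by looking up pairs of digit-marks in a table, the kind of rule the room's rule-book might spell out. Whether following such rules long enough would ever amount to understanding numbers is exactly the question raised above; the table itself never consults any notion of quantity.)

```python
# Hypothetical sketch of "math by rote": column addition carried out purely by
# looking up pairs of digit-marks in a table, the way a rule book might spell
# it out. No notion of quantity is ever consulted while the rules are followed.

DIGITS = "0123456789"
# table mapping (digit-mark, digit-mark) -> (carry-mark, result-mark)
ADD_TABLE = {
    (a, b): (DIGITS[(i + j) // 10], DIGITS[(i + j) % 10])
    for i, a in enumerate(DIGITS)
    for j, b in enumerate(DIGITS)
}

def add_by_rote(x: str, y: str) -> str:
    width = max(len(x), len(y))
    x, y = x.zfill(width), y.zfill(width)
    carry, result = "0", ""
    for a, b in zip(reversed(x), reversed(y)):
        c1, s = ADD_TABLE[(a, b)]      # add the two column marks
        c2, s = ADD_TABLE[(s, carry)]  # then add the carry mark
        carry = "1" if "1" in (c1, c2) else "0"
        result = s + result
    return (carry + result).lstrip("0") or "0"

print(add_by_rote("478", "356"))  # -> "834", produced without "knowing" arithmetic
```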

    ReplyDelete
    Replies
    1. I disagree that math or even sketch replication is any different from Chinese for a non-native speaker. I don't know about you, but in any math class where I have not been taught the meaning of the equations but have simply been taught a series of rules and steps to follow, I do not UNDERSTAND the math; I simply memorize a series of rules and execute them. Even then, I have some basis of mathematical understanding and can therefore gain a small understanding of a new series of rules; but if I really never knew math (or Chinese), I don't think I would ever be able to understand what these rules meant, regardless of how well I could remember how to execute each rule.

      Delete
  27. According to tenet 2, "mental states are just computational states," so why couldn't we induce mental states in a computer? Isn't that the whole issue that needs to be resolved in order to be able to make computation cognition? In my understanding, emotions are mental states. However, they cannot be induced in any software or robot, even if it is implementation-independent.
    Moreover, I think that comparing the reverse-engineered duck to the reverse-engineered human candidate is a poor comparison for several reasons. First, the duck cannot, for example, answer questions about a story or give any implicit information. Thus, there is less mental material to be reproduced. Second, the behavior of the duck can be easily reproduced, but the reasons for the behavior cannot be fully understood, nor communicated by the model. The cognitive capacity of the duck is also not fully understood by humans and thus cannot be reproduced entirely by any program.
    On the other hand, human cognitive activity can be explained in much more detail and seems to be more complex. So even if the duck model may succeed, it cannot be compared to the human one, whose capacity is much larger and more complex.

    ReplyDelete
  28. “Searle's right that an executing program cannot be ALL there is to being an understanding system, but wrong that an executing program cannot be PART of an understanding system”

    I agree with this opposition to Searle's all-or-nothing approach to program execution and understanding, because isn't performing the manipulations integral to eventually assigning them meaning, and thus to understanding? Can we have understanding without executing the program? I don't think so.

    “So there is really a microfunctional continuum between D3 and D4”

    I think here the professor is asking whether we need T4 to pass T3, because to be functionally indistinguishable at the microlevel, wouldn't we require the same structure, i.e., T4?

    “Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a phase transition into the mental.”

    I agree, because this claim is kind of air-headed: anyone can make any claim using the words "it may happen when." By that argument a lot of things "could" happen, but will they, or do they? They can't tell, and therefore the claim falls short of a permissible argument.

    “It's definitely not what we mean by "understanding a language," which surely means CONSCIOUS understanding.”

    But can't subconscious or unconscious understanding be a part of understanding? Or is it a question of their relevance?

    “Searle was also over-reaching in concluding that the CRA redirects our line of inquiry from computation to brain function”

    I have come back to add a comment to this section after doing Fodor's reading, as now I understand what the Professor meant by this. Sure, Searle disproved one explanation of cognition (computationalism), but his assumption that this told him where to direct subsequent inquiry is unfounded. I don't think he thought enough about how localizing functionality in the brain still tells us next to nothing about the causal mechanisms, which are what cognitive science is concerned with. I want to ask Searle what he thinks of that. Does he still defend his stance?

    ReplyDelete
  29. This comment has been removed by the author.

    ReplyDelete
  30. “Many, including Zenon, thought that the hardware/software distinction spelled hope not only for explaining cognition but for solving the mind/body problem: If the mind turns out to be computational, then not only do we explain how the mind works (once we figure out what computations it is doing and how) but we also explain that persistent problem we have always had (for which Descartes is not to blame) with understanding how the mental states can be physical states: It turns out they are not physical states! They are computational states.”

    This made me think of how Descartes thought that the soul was in the pineal gland and that this was where our thoughts are formed. I am sympathetic to the argument described here, that mental states are computational states and not physical states. Descartes would probably treat the pineal gland as the physical hardware on which our thoughts and our mind run. This relates to the problem with neuroscience and localizing the areas of the brain where our behaviors are formed. For example, if Broca's area has a lesion, then someone has Broca's aphasia. When the pineal gland has a lesion or a cyst on it, there are cognitive declines, but we could never say whether it is responsible for internal thoughts, since these are hard to measure. This is another argument in the battle between cognition and neuroscience: the mind can't be localized in the brain.

    ReplyDelete
  31. I had trouble understanding some of this article. I understood that Searle essentially believed that for something to produce human performance or human-like cognition, it must have some physical form. We've seen that computation is implementation-independent, and that software that can produce given capacities can be implemented in any "hardware" and will still run. Searle essentially makes himself the device and shows that at the T2 level he does not encounter the other-minds problem, and it is fully implementation-independent. However, since T3 includes sensorimotor capabilities, we cannot pass T3 with implementation-independence, considering the other-minds problem. Without knowing the "other minds," there is no way to confidently say whether mental states are just "states" or a form of computation. Cognition could be computation, contrary to Searle's point.

    ReplyDelete
  32. I really enjoyed reading all of the points and replies made against Searle's argument in this paper. I wonder whether, after being in this Chinese room for 50 years, Searle would not have picked up on some components of the language. Though I don't speak Chinese, I assume that if the same argument were applied to English (Searle being given input and then a complex list of rules to generate an output), it would be inevitable for him to learn and categorize some of the information after enough exposure. For example, even just picking out and recognizing words that appear more often (the, a, is, etc.) might help facilitate Searle's understanding of the foreign language. Conversely, I wonder whether Searle's Chinese Room system could truly pass a T2 verbal equivalence test, because he would fail to pick up on many nuances in conversation. Additionally, he wouldn't be able to provide novel or subjective input the way a true conversational partner might. For example, when asked "how are you?" in Chinese 50 times, you would expect Searle's system to have him return the same answer each time ("I am good"). A real person would not reply to the same input with the exact same output every single time, but in Searle's conceptualization of computation, this is inevitable.
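    (On the determinism point: a pure rule-follower need not give the same answer to the same input every time. Below is a hypothetical sketch, not anything from Searle or the paper, in which the rule book consults a counter and a list of canned replies, so the output varies. Variability alone therefore doesn't distinguish a rule book from a person.)

```python
import random

# Hypothetical sketch: a rule-following responder that does NOT give the same
# answer to the same input every time, because the rules consult stored state
# (a counter) and pick among canned replies. It still understands nothing.

HOW_ARE_YOU_REPLIES = ["我很好", "还不错", "有点累", "老样子"]  # canned "how are you" answers
asked_count = 0

def respond(question: str) -> str:
    global asked_count
    if question == "你好吗":  # "how are you?"
        asked_count += 1
        if asked_count > 3:
            return "你已经问过我" + str(asked_count) + "次了"  # "you've asked me N times already"
        return random.choice(HOW_ARE_YOU_REPLIES)
    return "请再说一遍"  # stock fallback: "please say that again"

for _ in range(5):
    print(respond("你好吗"))  # the replies vary, and eventually complain
```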

    ReplyDelete
