Saturday, January 6, 2018

10d. Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling.

Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling. Turing100: Essays in Honour of Centenary Turing Year 2012, Summer Issue. [Special issue: Turing Year 2012]


The "easy" problem of cognitive science is explaining how and why we can do what we can do. The "hard" problem is explaining how and why we feel. Turing's methodology for cognitive science (the Turing Test) is based on doing: Design a model that can do anything a human can do, indistinguishably from a human, to a human, and you have explained cognition. Searle has shown that the successful model cannot be solely computational. Sensory-motor robotic capacities are necessary to ground some, at least, of the model's words, in what the robot can do with the things in the world that the words are about. But even grounding is not enough to guarantee that -- nor to explain how and why -- the model feels (if it does). That problem is much harder to solve (and perhaps insoluble).

47 comments:

  1. “How does Searle know that he is not understanding Chinese when he is passing the Chinese TT by memorizing and executing the TT-passing computer program? It is because it feels like something to understand Chinese. And the only one who knows for sure whether that feeling (or any feeling at all) is going on is the cognizer -- who is in this case Searle himself.”

    This is a very pleasant article encompassing a significant part of the course so far: very straightforward and very comprehensive. I hadn’t grasped this notion until now, but the idea that Searle knows that while he’s performing the Chinese Room thought-experiment he isn’t cognizing, simply because he doesn’t feel the way he does when he is cognizing, is so simple that it’s easy to miss, and I did. I find this very enlightening: most theories go out of their way to explain why cognition isn’t mere computation, while this very ‘primitive’ idea is standing right in front of us. Searle simply perceives that cognizing involves grounded meanings, understanding the symbols beyond their shape, beyond their squiggles and squoggles, and in the CRA he doesn’t have the feeling that grounded understanding produces. Very nice article.

    1. Well, I don't think Searle thinks much about symbol grounding, but he certainly knows that he is not understanding Chinese, because it feels like something to understand (and not understand) Chinese.

    2. Elvire, Searle's emphasis on understanding went over my head too, although Professor Harnad stated that it was a key component in understanding how and why Searle knew that he didn't understand Chinese even though he passed the TT. It feels like something to understand, and now, in the context of the hard problem, it makes much more sense to me. The feeling of understanding is one of the more subtle feelings (vs. pain or hunger), and it didn't seem like the feeling of understanding was enough to dispute computationalism. But alas, now it all makes sense!

  2. This article reiterates much of what was said in the course. Turing demonstrated the importance of finding out how we do everything, as opposed to just looking at the behavioral output. The only way for us to do this is to reverse-engineer cognitive processing by creating a model that can do all these things, as we have done for other organs of the body. The brain does not perform a vegetative function, so this process is a lot more complicated, and we are still not close to having any model that can pass T3 or even T2 of the Turing hierarchy. Even if we manage to reverse-engineer cognition and create a model that passes T3, a model with sensorimotor indistinguishability, we cannot know whether we have really succeeded. Since it feels like something for us to think the things that we do and to experience the world, a device that has passed T3 must also be able to feel as we do for us to say we have really reverse-engineered cognition. Searle showed that just because something is in the same computational state as something else does not mean it is in the same feeling state. Because of the other-minds problem we cannot know what the model is feeling, if it feels at all. Thus, even if we have succeeded, we would never know it.

    1. The brain doesn't perform just vegetative functions...

  3. This article was an excellent summary of the ideas in the course so far. It helped to see how the consensus on the easy and hard problems evolved from Turing through Searle until now, and how to differentiate the two. My only question is this: why do we think that Turing was aware that computation alone would not pass T2? In other words, why do we believe that Turing knew that a T3-equivalent machine would be needed to pass the TT? When Turing proposed the test, he suggested that anything that could pass the verbal test would pass. Thus, Chinese-Searle would be able to pass the test (understanding obviously aside). Was Turing suggesting that as a criterion for passing, or is understanding necessary for passing the test at all?

    1. You are asking why "Stevan says" Turing was not a computationalist.

      (1) Turing dismissed the hard problem of feeling as well as the other-minds problem with his "doing-only" TT.

      (2) Searle's argument is not very deep, and I'm sure Turing would have been aware of the symbol grounding problem

      (3) His interest was in the doing-capacity, not the feeling, so for him, the fact that feeling would be missing with computation alone was not of particular interest (and unknowable anyway, because of the other-minds problem)

      (4) Turing of course holds the Strong Church-Turing Thesis: that computation can simulate just about anything (including the brain)

      (5) So whereas Turing would know that you need sensorimotor symbol-grounding to generate T3 capacity, he would also know that that could all be simulated computationally.

      (6) Turing probably also took into account the power of language (and its similarity to the Strong Church-Turing Thesis) and hence assumed that T2 was a strong enough test for cognition ("intelligence"), since it would require T3-grounded symbols to pass T2

      If forced to take a position on feeling, he would agree that feeling would probably be absent if T2 could be passed by ungrounded computation alone (but that it probably could not be). So T2 presupposes T3-capacity even if it does not test T3 directly. And all of it could be simulated computationally (both the T3 robot and its world) anyway...

      "Stevan says..." dixit

  4. I still find it very interesting that a mathematician (Turing) really spurred the advent of cognitive science. At first it may seem like computation doesn't have much to do with brains, but sure enough, that intuition hasn't completely got it right: computation can do a lot of neat things that we once thought only cognizers could. Computation seems to be a very important part of the picture, but perhaps not the entirety of it. I’m curious why there’s so much resistance to Searle’s knockdown argument against computationalism. Is it really just a misunderstanding of what Searle meant? Even if the computationalists did really get what he meant, would they even be satisfied with a philosophical argument rather than a mathematical proof contra their views? Then again, that may be begging the question (e.g., there's no mathematical proof because that's the wrong place to look?). Anyway, it also makes me wonder how the explanation of why and how we feel fits in with computation (and, more, what kind of explanation it’ll be).

    1. I find it especially interesting that what Turing engineered was just a mechanical device that searched for the settings of the machine the Germans used to encrypt their codes. This, to me, never seemed like it would be enough to constitute thinking. Even the mathematicians involved in trying to crack the code each day, before it switched, weren't making educated guesses; they were going through candidates in an almost mechanical way, and they were almost never successful until the time before the code switch had nearly run out. They didn't even fully understand what they were doing, so how could a machine that does this understand?

  5. This paper sets out to explicate that even if a model feels, we would not be able to access that information. Cognitive science’s agenda, as set by the Turing Test, is the “reverse-engineering of the capacity of humans (and other animals) to think” (Harnad 2012). It begins with Turing’s Imitation Game and his attempt to use it as a means to explain cognition in a purely verbal sense. As we discussed in class, and as Searle argued, a Turing machine only requires the correct manipulation of symbols and therefore cannot account for cognition. Following the Turing machine, Searle’s Chinese Room Argument holds that cognition is not computation (an argument against computationalism). And in the absence of sensorimotor experience, the symbols being manipulated could never be grounded, which brings us to Harnad’s Symbol Grounding Problem. In order to pass the Turing Test, we need not a T2 but a T3 machine, one with sensorimotor abilities. Consciousness is full of uncertainty. Even if we think we are feeling, how do we know whether these feelings are real or simulated? How do we know we are not living in a simulation?

  6. I’m quite positive that performance capacity is not a byproduct of feeling. I’m also under the assumption that feeling is not a byproduct of performance capacity. If it is, where is the threshold of doing capacity that leads to feeling? I think they are two separate problems, and that seems to be the position stated in this piece. However, there does seem to be a critical link between the two, and it’s been stuck in my mind throughout this week’s readings.

    Cognition isn’t just computation; it’s also categorization. For categorization to work, symbols need to have meaning. However, if symbols get their meaning through symbol grounding AND feeling, then how could we ever create a robot that has our performance capacity if we can’t program feeling?! Would it ever be able to categorize at the capacity that we can??

    1. I’m still confused about this as well, although in my case it pertains specifically to language. I.e. if you need access to meaning to produce language, then you need feeling (because meaning requires feeling). But we’ve contended that causal explanations of feeling are impossible. So how can you produce a robot that can do everything a human can if something essential to any of those doings is causally inexplicable?

      I think (but might be wrong) that our problem is that we’re mistakenly equating symbol grounding with meaning. Meaning = grounded symbol + feeling. The symbol-grounding problem doesn’t disprove computationalism for reasons related to feeling; it shows that sensorimotor interaction is needed to achieve the requisite performance capacity (i.e., to ground the symbols and be able to produce language). In other words, you have to pass T3 to pass T2. Presumably you can pass T3 as a zombie. If I’m understanding correctly, feeling (and meaning) are not necessary for the categorization aspects of cognition. Performance capacity, then, is still explainable (in theory) even if the hard problem is unsolvable. That doesn’t mean we’ve explained all of cognition (because we still haven’t explained feeling), but that’s what distinguishes the easy problem from the hard problem.

    2. You bring up a really good point, Willem. One of the features of language is semantics (meaning), and unlike computation, language has syntax that is not independent of semantics. This brings up your question of how we implement language in a robot if meaning (feeling + symbol grounding) is central to it. HOWEVER, do we need to explain meaning to explain language? I don’t think so. Although we want to reverse-engineer capacities to which feelings seem central (love, anyone??), it seems like the functional properties are enough to provide causal explanations sufficient to reproduce them (as you showed with the categorization example). If we consult Harnad’s T3-passing robot, who ‘Stevan says’ could probably feel, wouldn’t that mean the feelings arose as byproducts, since we certainly won’t have solved the insoluble hard problem before reaching T3, so we couldn’t have put them (the feelings) there ourselves? This would invalidate Ayal’s presupposition that feelings can’t be a byproduct of performance capacity. Why don’t you (Ayal) think that’s possible? I do agree that even if we accept that feelings are byproducts, that doesn’t explain WHY they exist at all, or HOW they arise to begin with.

    3. Willem and Christina make some good points here. To clarify what Ayal is getting at (or what I think he is getting at), the question here is: if we begin to design robots that get closer and closer to T3, at what point do we have to ask ourselves whether they have feeling? Further, at what point do we have to ask whether they feel like humans (not like dogs or chimps)? So, to agree with what Willem said, performance capacity does not require feeling (hence the easy and hard problems), but whether feeling follows from performance capacity is unclear (in response to Christina, I think this is what Harnad argues). But to clarify something in Willem’s point: I do not think you need access to meaning to produce language; this is exemplified by the CRA. So we can teach a robot to speak formally as we would, but once this robot has reached T3 capacities and passed the Turing Test, it is unclear whether it actually generates meaning from the symbols it is manipulating. Ultimately, it is impossible to determine whether or not a T3 is truly generating meaning; this is the other-minds problem. Further, if Searle (or Isaure) were actually a T3 but without meaning, would we truly believe that none of his relationships in life generated feeling or meaning? Would we feel comfortable kicking him?

    4. This is the thing that really gets me too. I don't know whether I believe feelings are a byproduct or intrinsically intertwined, or whether this is a worthy distinction at all. I see "byproduct" as meaning that, in the most basic form, doing generates a feeling of doing, whereas "intrinsically intertwined" means that doing and feeling cannot be unlinked: neither generates the other; they happen simultaneously.

      I think it is necessary to discuss feelings because they are essential to our cognition and our understanding of our experiences. But when I think about building robots to do human things, I don't think they truly have the same capacity as humans, at least not in the way we are currently making them. For example, a chatbot can carry on a conversation fairly well, but is it understanding? No, certainly not. It's just executing computations. So it is a model of conversation rather than a conversationalist. And maybe that's all robots will ever be, or maybe not. I'm not certain.

    5. I think the difference here is that Turing was interested in what we now call artificial intelligence while Harnad is more interested in cognitive modeling. Turing was interested in the easy problem, or how to make a computer do whatever we are able to do. However, Harnad and others in the Cog Sci community are now more interested in how organisms can feel the way they do and how we can implement that theory into computers.

      For robots to actually become sentient, we would first need to understand how exactly we are sentient. But that's part of the hard problem and something we may always struggle to answer. So maybe computers will not be sentient in the next 10 years; but because computing speed is always increasing, what took us thousands of years of slow evolution to develop (language) might take computer programs only a few years to learn. But there is always the issue of symbol grounding and whether these machines will really understand... Thus, we make our way further down the rabbit hole, with no real end in sight!

    6. Based on what Stevan says, it wasn’t that Turing was less interested in the hard problem or more interested in the easy problem, so much as that he didn’t believe the hard problem was something we would ever be able to answer scientifically. He sidestepped feeling because (again, according to what Stevan says) he thought the closest we could get to explaining cognition was to tackle all the doings and assume that, if we got that far, the robot just might actually feel, even if we could never prove it.
      Also, I disagree that “for robots to actually become sentient, we would first need to understand how exactly we are sentient”. Obviously, to answer the hard problem we would need to understand and then causally explain how and why we are sentient, but whether we actually learn that has no inherent effect on whether the robots we build can be sentient. Answer the easy problem, replicate all the doing, build a T2, T3, T4 or T5, and even then we will have no way of knowing whether that robot is sentient or not! Stevan says it probably can’t pass T2 and up unless it feels, but, as we were cautioned earlier in the semester, take that with a grain of salt.

    7. I agree with you, Carlee, that Turing thought that if we could simulate everything that we can do, we would have it all figured out. But we've seen with the Chinese Room that this is not the case, and that if we take out feeling we can still do all that we can do.

  7. I found that this article concisely and chronologically summed up the central topics we have covered in the course thus far. As we covered in week 3, the article highlights that Turing devised the Turing Test in an attempt to generate a causal mechanism capable of doing everything a human can do. Searle’s CRA revealed that a successful cognizing model cannot be purely computational, since computing Chinese well enough to “speak” it still lacked the feeling of understanding Chinese.

    In the present paper however, Harnad outlines that Turing was not a computationalist – he did not believe cognition and thinking was purely computational. We must remember that what Turing proposed was just an email pen-pal test (T2). Turing believed that anything in the universe could be simulated by computation (the Strong Church-Turing Thesis) – but Turing understood the limits to computation in explaining cognition and thus did not believe that generating a computational model that passes T2 would think and feel. Turing was focussed on generating a causal mechanism for the doing capacity of cognition, rather than the feeling capacity. He proposed a solution to the easy problem not because he did not recognize the hard problem, but because he believed it to be unsolvable.

    Finally, this leads me to ask: was Turing right in avoiding the hard problem? Is it unsolvably hard…?

  8. This paper summarized really well what we went over in the course so far and it really helped me understand the link between Searle, symbol grounding and ultimately the Turing Test. It would indeed not make sense for Turing to have believed cognition was all computation, and it is now clear to me why Searle could claim to not understand Chinese while the whole system still seemed to understand it. Symbol manipulation in the end teaches us nothing if there is no symbol grounding and without it, none of the subjects we talked about in this course would make sense.

  9. This was a great brief article, which gave an overview on all the themes we have touched upon in this class thus far. It felt good that I could follow along with ease, meaning that I'm understanding the material correctly.

    Regarding Descartes' Cogito which states "I can be absolutely certain that I am cognizing when I am cognizing", it's interesting to note that this is in the present tense. This means we can't apply it to the past, or for feelings we will have in the future, for that matter. Was I cognizing in the moment that just passed a second ago? I can't say for sure, because I only know that I am feeling right now and cannot confirm whether the same holds for a minute ago, or this morning, or 10 years ago. We can only be certain that we feel at this very specific moment in time. In a way, this is almost like a spinoff of the Other Minds Problem but applied to ourselves within the context of time.

  10. This paper was a good and concise summary and integration of what we have learned in this course. A statement that stood out to me was:

    “So Searle is simply pointing out that the same is true of computational simulations of verbal cognition: If they can be done purely computationally, that does not mean that the computations are cognizing.”

    This summarizes the strengths and weaknesses of computationalism in one sentence. According to the Strong (physical) Church-Turing Thesis, computation can simulate cognition, but a simulation of cognition is not cognizing. Searle points this out in his CRA: meaning is missing if all he is doing is input-output computation. In addition, cognition includes consciousness, and to be conscious is to be able to feel. Feeling is also missing from computation, which highlights the one limitation of the Turing Test’s ability to explain cognition. Nonetheless, I think most of the work has been done by Turing if we conclude that the hard problem is insoluble, since the TT can help us solve the easy problem through reverse-engineering.

  11. So essentially Turing may not have been a computationalist to begin with; he may just have felt the Turing Test was the best way to at least explain doing capacity, since feeling would be out of his grasp. This makes sense, as the Turing Test can measure doing capacity but never feeling capacity, which leaves room for Searle's Chinese Room Argument to work: Searle could DO without FEELing as if he understood Chinese, which showed that there is more than just doing involved.

  12. This comment has been removed by the author.

  13. If meaning is symbol grounding plus feeling, can we achieve meaning without first solving the problem of feeling? From the lectures and readings so far, we already know that the problem of feeling is insoluble: first, because there is no such thing as an unfelt feeling, and second, because there is no causal role for feeling that would explain anything. If we cannot answer the how and the why, how can we achieve feeling in AI, and further create understanding and meaning? Or, if we build a system that combines the executive algorithms with sensorimotor categorization capacity, will the feelings arise on their own?

  14. A strong article that encompasses the flow of logic in this course. My favorite part: "so Searle is simply pointing out that the same is true of computational simulations of verbal cognition... If they can be done purely computationally, that does not mean that the computations are cognizing." I find that understanding the difference between what humans do (cognizing) and what computers do (computation) is the bridge: either you accept that the hard problem is likely insoluble, or else you dismiss Searle's CRA and are led to believe that a TT-passing T3 robot would undoubtedly encompass all there is to cognition (feeling + doing).

  15. "Turing was perfectly aware that generating the capacity to do does not necessarily generate the capacity to feel. He merely pointed out that explaining doing power was the best we could ever expect to do, scientifically, if we wished to explain cognition."

    I found that this article not only summarized a lot of what I was meant to take out of this class, but also tied together a lot of loose ends. A persistent question nagging me was how a Turing-Test-passing robot could be cognizing if we didn't have any answer to the hard problem. Would a T3 or T4 or T5 robot be an answer to the hard problem?

    This article helped me understand that Turing wasn't expecting to magically have a robot that could feel everything. Somehow this sentiment got lost along the way for me. I really appreciated the further explanation that a TT-passing robot may be computational and dynamic but not necessarily feeling.

    This article made Searle's Chinese Room Argument that much more solid in my understanding of the course.

    1. I also really enjoyed this reading, as it summarized all the concepts we have discussed, from defining what cognitive science is (reverse-engineering to explain how we think) all the way to describing the easy/hard problems (doing/feeling), the TT, Searle’s Chinese Room thought experiment, the symbol grounding problem, the Strong and Weak CT Theses, the Cogito, and computationalism.

      RE: Reagan
      I would like to address your question: “Would a T3 or T4 or T5 robot be an answer to the hard problem?”
      I don’t think any TT level would help explain the hard problem. This is how I convince myself that Turing was not a full computationalist: if he were, he would not have designed a test where, even when you pass the highest level, you still haven’t explained cognition. If Turing had thought that computation is all there is to cognition, then once you passed the last level of the TT you would have explained all of cognition, but that is not what happens. We have called Isaure a T3 all semester, but why not a T5? For all I know she is made of the same chemicals as all humans. And even then, I can doubt that she feels. So, to answer your question, I don’t think passing T3 or T4 or T5 would be an answer to the hard problem.

  16. "[Turing] was perfectly aware of the possibility that in order to be able
    to pass the verbal TT (only symbols in and symbols out) the candidate system would have to be a sensorimotor robot, capable of doing a lot more than the verbal TT tests directly, and drawing on those dynamic capacities in order to successful pass the verbal TT."

    "Stevan says" that T3 is the minimal level capable of cognition, and that in order to pass T2, you would have to pass T3 first; the same argument that Turing was making. Is there a point to having a level less than T3 at all, then? The level below T2 (verbal) is simply T0 (toy), and if T2 actually would have to be T3 in order to pass as human in a verbal capacity, shouldn't we just start the scale at T3? Is there something unique about T2 that distinguishes its abilities from what T3 can do?

  17. This comment has been removed by the author.

  18. This reading essentially reiterated the concepts we have learned thus far in the course, including Turing’s theory of cognition as tested by the Turing Test, computation, Searle and his Chinese Room thought-experiment (which challenges the idea of cognition as just computation), as well as the symbol-grounding problem. Given the clear issues with computationalism, it is thought that Turing was not a true computationalist when it came to cognition, but rather that he believed anything could be simulated by computation as closely as we like, which is the Strong Church-Turing Thesis. Again, Searle’s argument comes into play, as he argues that a simulation is not the same as the real thing (i.e., a simulation of cognition is not the same thing as actually cognizing). Finally, the reading ends with Descartes’ idea of the “Cogito”: it feels like something to cognize, and I cannot doubt that I am cognizing while I am cognizing. This connects back to Searle in the Chinese Room, where he is performing symbol manipulation without the feeling of understanding. Overall, this reading was a nice review of the easy problem (doing) and the hard problem (feeling) in relation to Turing’s approach to cognition.

  19. It seems to me that Turing was smart and just left the hard problem of cognition alone. However, I wonder where in his work he indicated his belief in the importance of symbol-grounded perception? It seems to me that when he designed the TT he was just trying to simulate the doing of verbal communication. Kind of in the vein of the Weak CT Thesis: since language is a series of inputs and outputs, it falls under the category of things a mathematician can do, and can therefore be simulated. While he may have been aware of symbol grounding, I don't think the point of his test was to address it (nor did he make it a factor in it); instead he was looking at the mechanisms behind doing in verbal communication.


  20. This paper is basically a summary of what we have covered thus far in the course, yet somehow clears a lot of things up for me - mainly the close relationship that exists between the concepts and how they are all tied together. I like how it finishes with the open-ended question of the hard problem, hinting that everything we have learned so far has been to get to this point - to say that what we really want to know we will never know. This paper demonstrates nicely the truth that explaining the capacity to do does not explain the capacity to feel, and speculates that Turing did know this when theorizing about computational models that could pass the TT. I'm starting to think I'm slightly brainwashed, since this was a light and easy read for me and I am in agreement with most of the points presented, whereas at the beginning of the course it would have been like reading Chinese. Now I know what Searle meant when he said it feels like something to understand…


  21. Like many others have stated, this article does a good job of summarizing a number of the main points that we have touched on over the course of the class. However, the more we read about and analyze the main problems of cognition, the more I can't help but feel slightly disheartened on the quest! With each reading and class, it becomes clearer just how hard the easy problem is to solve, let alone the hard problem. If we cannot even answer the easy problem, how will we be able to answer the hard one? Is there a way to move forward, or is this the crossroads at which we have to decide either to accept the impossibility of solving the hard problem or to accept that a robot that passes T3 can actually 'feel'?

  22. This text concisely summarizes many of the ideas we have touched upon this semester: the easy problem, the hard problem, computationalism, the Chinese Room Argument, the symbol-grounding problem, the Church-Turing thesis, Descartes’ “Cogito”, etc. It very clearly explained the various connections between these topics, which I found helpful.

    “Turing was perfectly aware that generating the capacity to do does not necessarily generate the capacity to feel. He merely pointed out that explaining doing power was the best we could ever expect to do, scientifically, if we wished to explain cognition.”

    The above point caught my attention. I did not realize that Turing thought that the hard problem was insoluble, and it surprised me that despite what this giant claimed many years ago, cognitive scientists today are still attempting to solve it. I would be interested in knowing more about cognitive scientists who believe that the hard problem can be solved, and some of the ways in which they think this can be achieved.

  23. I really appreciate how this article beautifully tied up and brought together a lot of the concepts discussed in class. True to the course title and its alliteration, categorization, communication, consciousness, computation and cognition were all addressed and outlined. A large problem in this field, to me, is that philosophers often talk themselves in circles with hypothetical thought experiments; you can poke holes in any argument if you evaluate it for long enough. That being said, I think another salient question about passing the TT would be: how quickly does the candidate need to respond? Humans cognize immediately and unprompted, but computational models would need to take in and process the input, which could take more time (though probably very little). Is it possible to create a computational model able to respond just as quickly, every time, to every input, as a human? Likewise, computation has the advantage for certain processes like mathematics, where it may actually respond faster and more accurately than a human would.

    It can be very easy to misconstrue the opinions of those who are no longer alive to defend themselves. As stated in the article, Turing probably did not believe that computation alone explained cognition. I really enjoyed the last portion of this article where Turing's theory of computation was integrated with a dynamic system of feeling to explain cognition. Though I have no idea what this might look like, I hope and believe that such a model would pass.

  24. I really enjoyed this article as I found that it summarized many of the concepts that we talked about in class and also helped me better see the connections between the different ideas. To me, one of the most interesting things that we’ve discussed is the relationship between the symbol-grounding problem and Searle’s Chinese Room because it is a very strong, but obvious argument. Searle proved that cognition could not just be computation by relating it to understanding a language that is foreign to us.

    Although we may be able to manipulate the symbols of that language to produce a correct response (just as we do when plugging numbers into an equation in math), it does not mean that we truly understand what we are saying or what is being said to us. This introduces the symbol grounding problem: we have to have a referent for the words that we say in order to give them meaning. We cannot do this if we are acting in a purely computational manner (taking in symbols and putting out symbols). When grounding symbols, humans connect an arbitrary symbol to an actual referent. This allows us to refer to an object even when it is not physically present. In turn, this allows us to manipulate, describe, and categorize our environment in infinite ways, which can be expressed by the combinatorial power of language. (See the toy sketch below for the contrast between grounded and ungrounded symbols.)
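    A toy sketch of that contrast (the features and rules are invented for illustration, not anyone's actual model): the difference between manipulating a symbol by its arbitrary shape alone, as Searle does in the CRA, and connecting a symbol to sensorimotor features of its referent:

        # Ungrounded: rules defined over arbitrary symbol shapes alone.
        def ungrounded_rule(symbol):
            lookup = {"squiggle": "squoggle"}   # shape in, shape out
            return lookup[symbol]

        # Grounded (hypothetical): the symbol "apple" is connected to
        # sensorimotor features of its referent, not just to other symbols.
        def grounded_category(features):
            if features["round"] and features["red"]:
                return "apple"
            return "unknown"

        print(ungrounded_rule("squiggle"))                      # squoggle
        print(grounded_category({"round": True, "red": True}))  # apple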

  25. I appreciated this reading because of its clear, succinct style and how it nicely summarizes many of the main topics discussed in this class. However, it made me question some implications of the intersection of the symbol grounding problem and the idea of a closed dictionary that we discussed in class (April 10th). Within some closed dictionary, some words have the basis of their meaning in sensory experience (as stated by the symbol grounding problem). Other words within the dictionary get their meaning from definitions consisting of words that already have a meaning. Stevan gives the example of “zebra” being reduced to “horse” + “stripes,” “stripes” being reduced to “lines” + “horizontal,” and “lines” and “horizontal” stemming from other words which are eventually based in sensation. How many words whose meaning comes from sensation are necessary in a dictionary? Is this number proportional to the size of the dictionary, or is there a consistent, essential set that is determined by UG/OG rules or other constraints of language? Additionally, how far can a meaning based in sensation extend into a dictionary-tree? Can sensation-based meaning carry on forever as definitions build upon each other, or is there some point at which definitions become too abstract?

    1. Hi Anna, I'm not exactly sure what you mean by "meaning from sensation," but the idea of the minimal grounding set is that everyone has a set of grounded symbols that we use to build other words' meanings. We talked about reducing a dictionary to about ~1500 words whose meanings we know, and combining the words in that set to define all the other words in the dictionary. However, grounding this minimal set relies on our sensorimotor capacities, and it seems intuitive that the ~1500 words are more or less concrete. So I think what you mean by words being "based in sensation" actually refers to our sensorimotor capacities. For example, a unicorn is a word we ground through other words such as horse and pointy-cone horn (or something like that), because we can't actually observe a unicorn in the real world. But we can observe horses and pointed objects and other horns.
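      Here is a small sketch of that idea, using a made-up toy dictionary (the words and the helper function are invented for illustration): each word maps to the words in its definition, the grounding set is taken as known directly through sensorimotor experience, and we check whether every other word can be learned from definitions alone:

          # Toy dictionary: each word maps to the words in its definition.
          # Words given no definition here must be grounded directly.
          toy_dictionary = {
              "zebra": ["horse", "stripes"],
              "stripes": ["lines", "horizontal"],
              "unicorn": ["horse", "horn"],
              "horn": ["pointy", "object"],
              "horse": [], "lines": [], "horizontal": [],
              "pointy": [], "object": [],
          }

          def learnable(dictionary, grounding_set):
              # Repeatedly learn any word whose defining words are all
              # already known, until nothing new can be learned.
              known = set(grounding_set)
              changed = True
              while changed:
                  changed = False
                  for word, definition in dictionary.items():
                      if word not in known and definition and \
                              all(d in known for d in definition):
                          known.add(word)
                          changed = True
              return known

          grounded = {"horse", "lines", "horizontal", "pointy", "object"}
          print(learnable(toy_dictionary, grounded) == set(toy_dictionary))  # True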

  26. I almost wish that this reading had been given earlier in the course, but it’s evident why it’s perfect to be given now. For me, this reading summarizes the big ideas of the course. One of the most illuminating questions was whether Turing was a computationalist. When we were learning computation at the beginning, it seemed like Turing was a computationalist, although I never explicitly asked myself that question. Turing was a giant, and I think, in retrospect, that T2 was used for the TT because he wanted to eliminate biases. It’s clear that he probably knew that the only way to pass T2 was to pass T3. On the other hand, the Strong Church-Turing Thesis was always a bit confusing to me: any physical, dynamic structure or process can be simulated, but it does not follow that everything in the physical world is just computation. This article answered questions I didn’t even know I had! I think maybe I’m used to courses providing answers to big problems, and therefore I kept trying to think of solutions to other problems and almost always connected them to the TT. But this course isn’t one of them. Instead, we framed the easy and hard problems and decided which can be solved by the TT, and which may be insoluble.

  27. This comment has been removed by the author.

  28. I really enjoyed this article because it clearly outlined and discussed the different themes that we have covered in class. The easy and hard problems are clearly defined and are supported with arguments from Turing ("a computationalist") to Searle to Descartes. The problem still at hand is the hard problem, which we will probably never know how to solve, if it is in fact solvable at all, which is doubtful. How do we bring feeling into this? My major is cognitive science, and until now I had failed to see that this argument is the basis of cognitive science.
    One thing I found that I need to keep in mind is Descartes' argument about the Cogito: "but I can't doubt that I'm cognizing when I'm cognizing." This is undeniable, but what gets me is how we can use it to help us move forward towards approaching the hard problem.

  29. The paper is a great summary of the key ideas about the easy and hard problems of cognition that we’ve learned in the course, with an emphasis on the Turing Test. I find the following passage clears up some confusion I had about Turing’s claims:
    “So I do not believe that Turing was a computationalist: he did not think that thinking was just computation. He was perfectly aware of the possibility that in order to be able to pass the verbal TT (only symbols in and symbols out) the candidate system would have to be a sensorimotor robot, capable of doing a lot more than the verbal TT tests directly, and drawing on those dynamic capacities in order to successfully pass the verbal TT.” Put simply, this passage is saying that Turing was not a computationalist, as he was aware that being able to pass T3 is a prerequisite for passing T2. However, as Harnad states later in the paper, Turing’s argument only explains doing power; it is not enough to give an explanation of feeling. One question that I have for this paper, and for this course, is this: if a causal mechanism of how and why we feel what we feel is impossible at this stage, then once every theorist and scientist agrees with this claim, what’s the next step? After all, feeling does exist, and simply saying that it's impossible to solve the hard problem does not give us more information than other beliefs.

  30. This article was a very straightforward summary of some of the works we have studied thus far. It traced a clear path through the works of Descartes, Turing, Searle, and Harnad. A TT-passing being could be purely computational, and when all that is studied is "doing" and "doing-capacity," it can be virtually indistinguishable from a human actor.

    I find the path academics took to cognitive science interesting. Given that the field began with Alan Turing, a mathematician so clearly focused on functionality, input-output, and observable behavior, it seems natural that his hypothesis about human cognition would have computation at its core. Because the "easy" problem is the more tractable one, it makes sense that it is the one Turing chose to examine in his construction of the Turing Test.

    It makes sense that the logical next step for the field would be to examine the questions the Turing Test didn't attempt to answer: questions about feeling rather than doing. I suppose the question we will eventually answer is whether the hard problem is simply "hard" or whether it is impossible to answer; in other words, was Turing right to ignore it in the first place?

  31. This paper was interesting in the way it grasped everything that we have learned thus far and suggested future roads to cognition. While Turing could explain how we do something using computation, computation cannot explain how and why we feel when we perform a certain action. It’s evident that computation can explain certain factors in cognition, but as we saw in Searle’s writing, it doesn’t explain how we understand these computations and manipulations of symbols. It’s clear that the field of cognition needs to look at how and why we feel and understand. How and why do we learn language? How and why do we feel?

  32. “He merely pointed out that explaining doing power was the best we could ever expect to do, scientifically, if we wished to explain cognition.”
    This statement is true in the sense that creating a robot with the same verbal performance capacity as ours would be very powerful in explaining cognition, but is it truly powerful in explaining our cognition? This robot could be weakly equivalent (input-output equivalent) to us, but that does not ensure strong equivalence. This is why we have so many cognitive scientists working with the primary source material: brains. Admittedly, this work also has its uncertainties and suffers from the limits of correlation vs. causation. However, working with the human mind also offers us two frameworks for scientific discovery, first-person and third-person analysis. Furthermore, studying the brain has the noted advantage of studying a system we know has the capability of feeling and cognizing, and it therefore offers a higher potential yield for discovery.

  33. This paper answered all the questions that I didn’t know I had on this topic. I feel like this paper was very nearly a summary of the course content, and it made everything much, much clearer in my mind. I had not fully grasped the relevance of feeling in Searle’s Chinese Room Argument (CRA). After reading this paper, I can see that the only difference between Searle (in the CRA) and a real Chinese speaker is that Searle knew he could not understand. On every third-person, objective measure they performed equally; the only difference is in the first-person measures. Therefore this first-person data is invaluable, regardless of the unreliability of self-report.

