Saturday, January 6, 2018

1b. Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20

Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20. In Dedrick, D., & Trick, L. (Eds.), Computation, Cognition, and Pylyshyn. MIT Press 


Zenon Pylyshyn cast cognition's lot with computation, stretching the Church/Turing Thesis to its limit: We had no idea how the mind did anything, whereas we knew computation could do just about everything. Doing it with images would be like doing it with mirrors, and little men in mirrors. So why not do it all with symbols and rules instead? Everything worthy of the name "cognition," anyway; not what was too thick for cognition to penetrate. It might even solve the mind/body problem if the soul, like software, were independent of its physical incarnation. It looked like we had the architecture of cognition virtually licked. Even neural nets could be either simulated or subsumed. But then came Searle, with his sino-spoiler thought experiment, showing that cognition cannot be all computation (though not, as Searle thought, that it cannot be computation at all). So if cognition has to be hybrid sensorimotor/symbolic, it turns out we've all just been haggling over the price, instead of delivering the goods, as Turing had originally proposed 5 decades earlier.

53 comments:

  1. “But Chomsky’s lesson to Skinner applies beyond syntax. Vocabulary learning – learning to call things by their names – already exceeds the scope of behaviorism, because naming is not mere rote association: things are not stimuli, they are categories. Naming things is naming kinds (such as birds and chairs), not just associating responses to unique, identically recurring individual stimuli, as in paired associate learning. To learn to name kinds you first need to learn to identify them, to categorize them”

    After a child associates a word (say, “chair”) with the category it understands as chairs, the child does not merely link “chair” to the particular chairs it has already seen: it extends the concept to all chairs. If every chair the child ever saw was blue, a red chair is still recognized as an object that fits the category named “chair.” Associating the word only with the individual chairs encountered in one's life would make word learning far too long and effortful. Moreover, children tend to overextend the categories they create: a child who has not yet learned the word “sofa” may call a sofa a chair, because a sofa looks like a chair without belonging to the category “chair.” Overextension thus demonstrates that children learn new symbols by grouping similar objects into categories, which lets them extend a learned category to objects they have never seen. To do so, the child needs more than simple reinforcement from the parent; with reinforcement alone the child would not overextend, but would wait for parental confirmation before associating each newly experienced object with a symbol for that particular object.

    ReplyDelete
    Replies
    1. To learn when they are overextending or underextending a category, children need feedback as to when they are right and when they are wrong. Children have to learn which features are irrelevant to being in the category, and free to vary, and which features matter, and must be invariant. Reinforcement is feedback.

      Delete
    2. This is an excellent explanation of the beginnings of category learning, and I agree with your interpretation both of how children come to recognize objects and of their tendency to overextend the categories they form to encompass never-before-seen objects. My question is this: once the category has been learned, what prevents the child from overextending it to an incongruous object? In your example, you suggest that a sofa is something a child would include in the category of chair when seeing a sofa for the first time. But what if the category they are using is not ‘chair’ but something more general, e.g. ‘somewhere to sit’? What if they associate something like a countertop or stairs with chair because chair, for that child, is simply an instance of the category ‘somewhere to sit’? Reinforcement seems to solve part of the problem (what is correct to associate), but how does a parent prevent incongruity? Just food for thought…

      Delete
    3. Zachary, to categorize is to do the right thing with the right kind of thing. For most categories we need to learn, by trial and error, what's the right kind of thing to do what with. (Reduction of uncertainty among a finite number of alternatives on which consequences for you depend.) To learn to categorize is to learn (consciously or unconsciously) to detect the features that distinguish members from non-members and to ignore the irrelevant features. This learning involves over-extending and under-extending, with corrective feedback (from the consequences of having done the right or wrong thing) until you get it right. Yes, that's "reinforcement," but in itself that explains nothing.

      Delete
  2. “Our minds will have to come up with those hypotheses, as in every other scientific field, but it is unlikely that cognition will wear them on its sleeve, so that we can just sit in our armchairs, do the cognizing in question, and simply introspect how it is that we are doing it. In this respect, cognition is impenetrable to introspection”

    You can sit in your armchair and introspect all day long about how the brain comes to think and how thoughts come to mind, and still arrive at no concrete answer. Inner thoughts are the basis of our days, yet we cannot introspect our way to understanding how we come up with them, or how they simply “pop up” in our heads without warning. We have been engaging in inner thought since infancy, and still we cannot introspect to discover how we do this thing we have been doing our whole lives. Since introspection does not allow us to understand how we come to think in the first place, I do not believe introspection is a proper way to study cognition, as the article argues.

    ReplyDelete
    Replies
    1. I agree with most of your statement: introspection is clearly not the way to understanding cognition. However, I'm not sure I agree with the notion of thoughts just popping into our heads. Several experiments (such as the famous one done by Libet) address the question of free will, and suggest that the thoughts that "pop up" might just be a byproduct of decisions made pre-consciously. Still a very articulate statement on your part.

      Delete
    2. I agree that the notion of thoughts "popping up" into our heads is a bit of a simplification. However, Libet's work is relatively dated at this point, and a multitude of more recent research has cast serious doubt on his findings and protocol. Conversation of free will aside, I think the bigger issue here is that the terminology "pop up" seems to imply that no other knowledge or background is involved in the creation of thoughts. I would argue that the thoughts we have are based on years of learning, categorization and, importantly, forgetting. This process allows us to form concepts of how the world works -- schemas -- that are quite disconnected from the time and place in which we originally learned them. For example, you would probably be hard pressed to identify how or when you learned that "dog" refers to a furry, four-legged creature; you just know it. Some have argued that the creation of these schemas, and their disconnection from the source at which you learned them, reflects the transfer of memories from the hippocampus to the cortex. When the cortex loses connections to the hippocampus, these memories lose their association with place and time, and are stored long-term in the cortex itself. Regardless, what is important here is that the thoughts we have are intrinsically related to our understanding of the world around us, whether or not they seem to just appear in our minds.

      Delete
    3. Sacha, kid-sib asks "What does thoughts popping into your head have to do with free will?"

      Marcus, yes, the original Libet experiments had weaknesses, but more recent, more sophisticated experiments are confirming their findings. (Could you please explain for kid-sib what all this Libet/free-will business is about?)

      And kid-sib says: "schema?" What does that mean? and what does it explain, and how? I thought the idea was to look for causal mechanisms that can do what Isaure can do, so that we can find out how it is done. (Is a "schema" a causal mechanism? how does it work?)

      And what does all this have to do with whether cognition is computation?

      And what is computation, anyway?

      Delete
  3. “What has to be going on inside our heads that enables us to successfully learn, based on the experience or training we get, to identify categories, to which we can then attach a name?” I would like to comment on this part of the reading. How we learn language is still an ongoing debate between two opposed schools: Chomsky’s nativist theory vs Skinner’s behaviorist theory. However, a theory that seems to be getting less attention is the one put forward by Saffran et al. in 1996: the statistical approach. This theory holds that children become attuned to recurrent speech patterns and are able to infer that a repeating sequence must be a word. Saffran and colleagues put the theory to the test and showed that it takes surprisingly little time to habituate an infant to a new word this way. For example, if you habituate an infant to hearing the sound "ba" always followed by the sound "by", the infant will grasp that "baby" is a word. This approach is interesting because it gets us closer to a mathematical explanation of language, which would in turn make it easier to model on computers.
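    The transitional-probability statistic behind this approach can be sketched in a few lines of Python. This is only a toy illustration of the computation, not Saffran et al.'s actual stimuli or procedure; the syllables and "words" below are made up:

```python
# Toy sketch of Saffran-style statistical learning: estimate the
# transitional probability P(next syllable | current syllable) from a
# continuous syllable stream. Within-word transitions are (near-)certain;
# probability dips mark likely word boundaries.
from collections import Counter

def transitional_probs(syllables):
    """P(b | a) for each adjacent syllable pair (a, b) in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A stream built from the invented "words" ba-by and do-ggy:
stream = "ba by do ggy ba by ba by do ggy do ggy ba by".split()
tps = transitional_probs(stream)

print(tps[("ba", "by")])  # 1.0 -- "ba" is always followed by "by"
print(tps[("by", "do")])  # < 1.0 -- a word boundary: "by" is followed
                          #          sometimes by "do", sometimes by "ba"
```

The point of the sketch is just that word segmentation can fall out of a simple statistic over the input, with no built-in lexicon.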

    ReplyDelete
    Replies
    1. The Chomsky debate is about grammar, not phonology, and we'll get to it in week 9. But since we have no secrets, I can tell you now that the debate is about whether certain grammatical rules (not all grammatical rules, just a special set called "Universal Grammar," "UG") can be learned by the child (whether Skinnerianly or statistically) the way we learn all other categories. Because they cannot, we must be born already knowing UG.

      Delete
    2. It is interesting that you point out Saffran’s statistical learning approach to language acquisition, because another approach arising from this is Marcus et al.’s (1999) abstract rule learning (which is thought not to replace Saffran’s approach, but to be an additional mechanism through which we learn language -- but I won’t get into that). In short, infants under a year of age are able to learn simple abstract rules from sequences of syllables/speech sounds (e.g. “lo fi fi” and “ga ti ti” follow an ABB rule) and then apply those rules to completely new sequences of syllables, recognizing which follow the ABB rule and which don’t. This seems to suggest that we have some underlying mechanism for learning grammatical rules: we extract the rules from the input we hear, can then recognize which new input is grammatical and which is not, and subsequently produce our own novel grammatical speech. So perhaps it is not that we are born with UG in the sense of already knowing syntax (e.g., that in English a sentence must follow a Subject-Verb-Object order), but rather that we are born with the capacity and propensity to learn grammar.
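      The ABB pattern itself is trivially simple to state as a rule over arbitrary syllables -- which is the point: the rule abstracts away from the particular sounds. A toy sketch in Python (my own illustration, not Marcus et al.'s stimuli or procedure):

```python
# Toy check of the abstract ABB rule: the first syllable differs from the
# second, and the second and third are identical. The rule is defined over
# the pattern alone, ignoring which particular syllables fill the slots --
# which is why it generalizes to novel syllables.
def follows_abb(seq):
    a, b1, b2 = seq
    return a != b1 and b1 == b2

# Familiarization items from the text...
print(follows_abb(["lo", "fi", "fi"]))  # True
print(follows_abb(["ga", "ti", "ti"]))  # True
# ...and generalization to completely new syllables:
print(follows_abb(["wo", "fe", "fe"]))  # True (ABB, novel syllables)
print(follows_abb(["wo", "fe", "wo"]))  # False (ABA pattern)
```

What the infants accomplish, of course, is inducing such a rule from examples rather than being handed it; the sketch only shows how little the rule itself depends on the training syllables.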

      Delete
  4. “A misapplication of Wittgenstein (1953)...is to conclude that if we cannot introspect the rules for categorizing things...then those rules do not exist. A more valid inference is that cognitive science cannot be done by introspection.”
    I agree with the statement that cognitive science cannot be done by introspection. Introspection is an insufficient tool in the search for the complex actions underlying cognitive processes. For example, we have all been in conversations where a shareable thought comes to mind, but quickly disappears before we can share it. If you turn to introspection in hopes of recovering it, you are likely to be unsuccessful. Introspection does not give you access to the processes underlying the thought and therefore doesn’t allow for thoughts-on-demand. We cannot understand how our brain arrives at certain conclusions using introspection; this method only allows for intelligent guessing.

    ReplyDelete
  5. Introspection clearly does not allow us to figure out what exactly our brains are doing when we think; however, I think the issue actually begins well before we start reflecting on our own mental processes. I was struck by the issue of “picture completion,” which the article mentions as present in “all conscious cognition,” and how this might be at the root of why introspection fails. When we recall something via mental imagery (i.e., we see the thing we are trying to recall, like a place we visited as kids), our memory of that event can be incomplete or, more importantly, modified. When we recall memories they become unstable, which leaves room for them to be altered. The brain tends to add false elements to a memory when it is lacking them, yet as we recall these “memories” everything appears consistent and complete, and we believe them to be accurate. Thus the data we use to build theories of how we think are not accurate, and in turn anything we observe or theorize from those data will be as flawed and incomplete as the thoughts that generate them.

    ReplyDelete
  6. “The question is: "How are we able to learn words, as shaped by our reward history? What is the underlying functional capacity?" (Chomsky called this the "competence" underlying our "performance"). The answer in the case of syntax had been that we don't really "learn" it at all; we are born with the rules of Universal Grammar already in our heads. In contrast, the answer in the case of vocabulary and categories is that we do learn the rules, but the problem is still to explain how we learn them(…”)”
    In Chomsky’s consideration of the real questions that behaviorism failed to ask (those in the above quotation), these answers for how we learn syntax and vocabulary may bear on the fact that cognitive functions cannot simply be made up of computational processes, contrary to what Pylyshyn had proposed. Though both syntax and category learning represent cognitive functions and computational tasks, the Universal Grammar rules postulated by Chomsky are perhaps comparable to the built-in commands for manipulating binary symbols that characterize modern computers. As for vocabulary and category learning, the very fact that we have to learn the rules may point to more dynamic processes that have yet to be described. Chomsky’s contributions are especially significant in this respect, as they highlight the necessity of considering both dynamic and computational cognitive processes in explaining how humans perform computational tasks. Though it is clear that Pylyshyn failed to acknowledge this, it remains uncertain what is meant by his criterion for something to be considered “cognitive”: “what could be modified by what we know explicitly.”

    ReplyDelete
    Replies
    1. Sofia, you're right that Pylyshyn's criterion for what counts as "cognitive" (namely, "cognitive penetrability") is not very helpful, and the analogy with Universal Grammar (UG) does not help either. UG (as we'll learn in week 8) is a set of syntactic (hence computational) rules for generating syntactically correct (UG-correct) sentences in any human language. Computation is syntax. No problem there.

      The problem with UG is that it is unlearnable from the data available to the language-learning child, and hence UG must be innate. Most other categories, in contrast, are not. They are learned, and require a learning mechanism, not a built-in set of rules for categorizing anything and everything we may one day want or need to categorize.

      Delete
  7. “Behaviorists had rightly pointed out that sitting in an armchair and reflecting on how our mind works will not yield an explanation of how it works.” This makes me think of Martin Heidegger, who argues that the only way to know the human mind is to observe it in action, in all its subjectivity. Often we try to abstract ourselves out of the human experience in order to talk about the human experience, but wouldn’t the best way to understand consciousness be simply to be a conscious being in the way that we are, and just pay attention to that?

    ReplyDelete
    Replies
    1. I do agree that it would be helpful to observe human behavior in action; nonetheless, it is difficult to do this with consciousness, as it is not easily observable. The problem with introspection, however, is that the backtracking we use to reach conclusions is more often wrong than right. The processes we used to describe our third-grade teacher in class on Tuesday (remembering which school we went to, remembering which class we had, then remembering the teacher, etc.) were obviously not accurate accounts of what was truly happening in our brains. The way our perception of events can become skewed or biased is also fascinating and important, as it comes up often in eyewitness testimony. There, what we truly believe happened and what actually happened might differ significantly.

      Delete
    2. Daria, wasn't Heidegger inviting us to do introspection (phenomenology) rather than observe behavior?

      Kayla, the main problem with introspection is not that it's biased but that it is uninformative: it does not help us explain how and why we can do what we can do.

      Delete
  8. I take issue with Zenon's belief in "the implication that words and propositions were somehow more explanatory and free of homuncularity than images." Again, if we cannot introspect into our own thought processes and extract the methods and underlying mechanisms at work, how can we know that pictures are any less computational than words and numbers? Who are we to define what is more or less "cognitive" if we cannot even understand the basis of the computations going on inside our brains?

    ReplyDelete
  9. "It had to be admitted that the processes going on in the head that got the job done did not have to be computational after all; they could be dynamical too. They simply had to do the job." This is such an important point! Not every process we humans carry out is computational, or at least not a step-wise process.

    It calls into discussion the impulsive behavior of humans, and the feeling of "regret." Were we to simulate a human brain in an AI machine that passed the Turing Test, would it be capable of making impulsive or "incorrect" decisions?

    ReplyDelete
    Replies
    1. What is computation? Before we get into whether it can be done impulsively (why not?). Isaure can express regret in words and behavior. But whether she feels it (or feels anything) is another matter...

      Delete
  10. “So the TT-passing program is no more cognitive than any other symbol system in logic, mathematics or computer science. It is just a bunch of symbols that are systematically interpretable by us -- by users with minds. It has again begged the question of how the mind actually does what it does – or rather, it has failed to answer it.”
    I might be completely off base with this, but Searle’s thought experiment led me to a different conclusion. I agree that the Turing Test wouldn’t allow us to tell whether the thing we were dealing with had a mind, but I saw that as a failure of the test itself, not a failure of computation. The TT has a behavioural criterion for passing, which creates several problems. I think that criterion is why the focus is, essentially, on tricking the experimenter (how good at pretending to be human is this program?). But that doesn’t make it a valid test. The TT emphasizes functionality over explanation, when explanation is the very thing we are looking for. The experimenter draws a conclusion without knowing how the program works, which I think is misguided: knowing whether something is a real artificial intelligence (capable of doing everything humans can) will necessarily depend on having that explanation. Behavioural output is not a good criterion if we are concerned with knowing whether something has a mind or not -- that is the failure of the Turing Test! The TT takes behaviourism seriously as its criterion, when we know behaviourism is flawed, because pretending to be human isn't the same as being human. If we had a good explanation of how the mind worked, would there even be a need for a Turing Test?

    ReplyDelete
    Replies
    1. The TT is not about tricking anybody. It's about generating Isaure-capacity -- real Isaure capacity.

      Before we can discuss Searle's argument (in 2 weeks) we first have to settle on what computation is and what it can and can't do, and how and why.

      Turing made the TT purely behavioral because what Isaure (or parts of Isaure in her head) do is all we can observe and explain. Whether and what she feels is another matter.

      If we knew how the mind works, we wouldn't need the TT! The TT is for testing our candidate causal mechanism to see whether it can do what Isaure can do. If it fails, it's not the TT that fails, but the candidate mechanism. Back to the drawing board.

      Delete
  11. “But there is one substantive issue on which I think Zenon has quietly conceded without ever quite announcing it, and that is symbol grounding—the ultimate question about the relation between the computational and the dynamical components of cognitive function (Harnad 1990).”
    Computation, defined as “rule-based symbol manipulation” (Harnad, 2008), is argued to be insufficient for cognition -- in particular, for the attachment of meaning to the symbols themselves. Yet if computation is necessary for cognition, I would be curious to know at which point computation becomes cognition. Moreover, is computation in fact necessary for cognition? If computation and cognition are inseparable, then how does the brain perform a simple computational task without its evolving into cognition?
    I would also like clarification on why memorization and symbol manipulation in Searle’s experiment create an “entirely computational system.” If multiple computational processes take place in Searle’s brain and Searle weighs them differently to interpret the meaning of symbols, could that not constitute a “dynamical” and thereby cognitive process? In other words, the brain would conduct multiple computational processes differing in strength. But as I was typing this out, the question arose: how does the differential weighing of computational processes take place, and why?

    ReplyDelete
    Replies
    1. 1. Computation does not become cognition; it is a (potential) component or part of cognition -- rather like action potentials. Except that computation is not a structural component (hardware) but a functional one (software).

      2. Computation is the rule-based manipulation of symbols (arbitrary objects) on the basis of their (arbitrary) shapes (e.g., 0, 1).

      3. The rules are called "algorithms" or "programs" (software).

      4. Whatever is doing the symbol manipulation (including Searle in the Chinese Room, when we get to him, in two weeks) is a "computer."

      5. "Hardware" does the computation (symbol manipulation), but the computation itself (software) is independent of the hardware, in the sense that countless different hardwares could perform the same computation (execute the same software). (A desk calculator can calculate 23 x 27 and so can a person: different hardwares, same software, so same computation.)
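      The hardware-independence in point 5 can be made concrete with a toy sketch (my own illustration, not from the paper): two different "mechanisms" -- the interpreter's built-in multiplier and an explicit rule-following loop -- execute the same computation:

```python
# Rule-based symbol manipulation: reduce multiplication to repeated
# addition, a rule a person could follow over numerals on paper just as
# the interpreter follows it here. What makes it the same computation is
# the algorithm, not the hardware executing it.
def multiply_by_repeated_addition(a, b):
    total = 0
    for _ in range(b):
        total += a  # apply the "add a" rule b times
    return total

# Different "hardwares," same software, so same computation:
print(23 * 27)                                # 621
print(multiply_by_repeated_addition(23, 27))  # 621
```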

      Delete
  12. I’m not sure I understand what is meant by a dynamic approach to the visual rotation judgement. In the case of an analog rotation-like process, would the brain not still be taking an input (the starting location) and applying some sort of procedure to get an output (the final location), which would then be a form of computation? I can see how physically rotating could be non-computational, but in the case of a mental representation there doesn’t seem to be an object to physically rotate; it’s all manipulation of information.

    ReplyDelete
  13. This comment has been removed by the author.

    ReplyDelete
  14. “Behaviorists had rightly pointed out that sitting in an armchair and reflecting on how our mind works will not yield an explanation of how it works” (Harnad, 2009). Looking at introspection, I agree that thinking about what we know and how we know it is not beneficial in understanding the mechanisms behind how a thought comes into our head. That being said, I think it’s important to note the concept of memory cues and how a certain thought process begins and ends. As Hebb’s theory stated, “neurons that fire together wire together,” and so, on a molecular and neurological basis, can’t we say that we understand how a thought arises?

    ReplyDelete
  15. I find the point raised about Universal Grammar in "Computation at 70, Cognition at 20" interesting: “We are born with the rules of Universal Grammar already in our heads.” From observation, we can see that children are predisposed to learn vocabulary and can extract grammar rules from multiple examples. But I do not see how there could be an innate Universal Grammar. As children are exposed to words, they learn them. After they have gained enough vocabulary, they try to extract rules from the regularities they hear. Learning those rules leads to a dip in their ability to speak, because they make mistakes by overgeneralizing the rules they have learned. This is clear when they conjugate irregular verbs: children will say "flyed" instead of "flew," even though they had previously learned and used "flew" correctly. They need to relearn the exceptions, because the rules they come up with are arrived at through trial and error. They do not seem to know the rules beforehand; they seem to make them up as they go and then over-apply them to discover the rules' limitations.

    ReplyDelete
    Replies
    1. It’s true that children don’t automatically know what the rules of grammar are, but I believe they do have something that allows them to deduce that there is such a thing as grammar and then construct its rules for themselves. Adults don’t really go around telling young children grammatical rules; children hear them in day-to-day speech and then form possible rules in their heads. I think that’s what Chomsky is getting at with Universal Grammar. Children are predisposed to notice similarities in plural formation in English, just as they might be predisposed to notice verb conjugations in French, since forming a plural is harder in English than conjugating a verb, and vice versa in French. This Universal Grammar seems to me more like a system that lets them catch occurrences of certain grammatical rules in preference to others, depending on the language they are learning, so that they achieve maximum proficiency in that specific language much faster than if they had to be told every grammatical rule as they encountered it.

      Delete
    2. If I understand correctly, Chomsky’s “poverty of the stimulus” argument is stating that the input available to the child isn’t rich enough for the child to acquire ALL features of their language, not that the child is unable to acquire ANY features of the language. If the features are not in the input, then there is no way we learned them through reinforcement. Since it is not learnable from the outside, then it must already be built into the mind. I agree with you that there is a process of learning going on, but I think the process is two-fold. Babies are born with a Universal Grammar that contains all rules for all languages. As they learn, what they are really doing is getting rid of most rules and keeping only the ones that are relevant to their mother tongue. This is the first type of ‘learning’. The second type is forming rules as they are exposed to them, as you have suggested.

      Delete
  16. ““What was the name of your 3rd grade school-teacher?” When we triumphantly produced our respective answers, he would ask “How did you do it? How did you find the name?”.... “Beware of the easy answers: rote memorization and association.”
    So the problem with introspection is that it is inherently uninformative, because it can’t really explain how we know that our 3rd-grade teacher is our 3rd-grade teacher? While I understand that we can’t exactly explain how we produce this information, don’t we at least try when invoking Hebb’s idea that neurons that fire together wire together? If someone asks the name of your 3rd-grade teacher, an association formed when you repeatedly went to class in the 3rd grade with the same teacher is reactivated, and you are able to activate those neurons to find the answer. In the same way, could we say that our ability to reason or to learn from experience is also the result of certain neural pathways being reinforced so that they are more frequently activated, shaping us in certain directions -- or am I guilty of trying to explain this in an introspective manner?

    ReplyDelete
  17. Although not everything is pre-memorized, as stated, is memory a type of computation? I feel that memory is the foundation from which information is expanded. If not, is there a way to distinguish something that has been retrieved from memory from something that has been computed? Take the categorization of objects, for example. Is a chair perceived as a chair because it has been memorized that a chair looks a certain way (has four legs, etc.)? Is it the retrieval or extraction of this information from memory that is considered computation, or is the memory itself considered computation? Will it ever be possible to figure out how these memories are formed? What happens when memories are unreliable, making the information being computed unreliable as well? As stated, introspection can only do so much, so how do we approach the way these initial memories are formed, and how accurate they are?

    ReplyDelete
  18. The article argues that “computation alone can always be shown to be noncognitive and hence insufficient”, and “there are other candidate autonomous, non-homuncular functions in addition to computation, namely, dynamical functions such as internal analogs of spatial or other sensorimotor dynamics: not propositions describing them nor computations simulating them, but the dynamic processes themselves, as in internal analog rotation; perhaps also real parallel distributed neural nets rather than just symbolic simulations of them.” (Harnad, 2009)
    My understanding is that the dynamic processes described above are the dynamics of neural circuits and pathways, which form connections and associations between symbols created by computations. That being said, does it make sense to conclude that, unlike pure computation, to which an external element gives commands/inputs, there is no such thing in our brain that directs us to recognize, categorize, recall, interpret or reason? Is cognition basically the result of the brain reacting to certain sensorimotor stimuli and further associating these stimuli with relevant symbols via neural dynamic processes?

    ReplyDelete
  19. "So the TT-passing program is no more cognitive than any other symbol system in logic, mathematics or computer science." (Harnad, 2009)

    In "Cohabitation: Computation at 70, Cognition at 20", Searle's Chinese Room thought experiment is utilized to reexamine the idea of cognition as computation. I find the issue of infinite regress particularly illustrative regarding the issues we encounter when trying to determine how cognition occurs. By relegating the task of understanding cognition to simply creating a machine that can pass as human, we are completely failing to address what is referred to as "autonomous, non-homuncular functions in addition to computation". These "dynamical functions" are at the root of the hard problem and it seems to me that the paper highlights the still elusive nature of solving this problem.

    I find the symbol grounding problem to be fascinating, although very frustrating. In particular: “how can symbols in a symbol system be connected to the things in the world that they are ever-so-systematically interpretable as being about: connected directly and autonomously without begging the question by having the connection mediated by that very human mind whose capacities and functions we are trying to explain!” Is this saying we can't understand symbol grounding as handled by the human mind because we do not understand the human mind? Will we first have to understand the human mind to understand symbol grounding, or would it ever be possible to understand symbol grounding (through machine models perhaps) to elucidate the workings of cognition?

    ReplyDelete
  20. In this paper, the professor describes the symbol grounding problem as the true obstacle between computation and its dynamical components, and the best solution as a real TT-passing machine that would not only convince people of its authenticity, but could itself mediate the connections between symbols and reality, directly and autonomously. However, another major difference between humans and robots is that humans have a sense of ourselves versus the external world. This sense of self is coherent and clear, and might even give rise to a sense of dominance over our own actions, in other words, our "free will". Computers start reading and computing when a human presses the start button, whereas we conduct actions not because a command has been pressed by somebody else, but of our own accord, or at least we feel it this way. Is this phenomenon also part of the cognition that is waiting to be explained? If so, will a TT-passing machine one day reach the level at which it thinks it has free will as we humans do?

    ReplyDelete
  21. “Computation is rule-based symbol manipulation” (Harnad, 2005).

    Harnad recaps Searle’s Chinese Room experiment, in which Searle proposes taking the Turing Test in Chinese. If Searle serves as the computational system himself, he can learn and apply the appropriate rules of symbol manipulation, and thus, even without understanding a word of Chinese, he is able to generate all the correct outputs. It would therefore appear that Searle, a non-Chinese speaker, understands Chinese (i.e. he passes the Turing Test).

    But this is merely a simulation of understanding Chinese, as Searle still does not understand the language. So yes, we have achieved successful manipulation of Chinese symbols, but this brings us back to the computation vs. consciousness distinction Harnad addresses: “If symbols have meanings, yet their meanings are not in the symbol system itself, what is the connection between the symbols and what they mean?” (Harnad, 2005). This leads me to question the distinction between passing Searle’s Chinese Turing Test and actually speaking Chinese.
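    The "rule-based symbol manipulation" at issue can be sketched in a few lines. This is a minimal, hypothetical toy (not from the paper, and nothing like a real TT-passing program): the rule table maps input shapes to output shapes, and nothing in the lookup ever touches what the symbols mean.

    ```python
    # Hypothetical rule table: canned Chinese replies keyed on input shape.
    # The program matches and emits character strings; the glosses in the
    # comments exist only for us, never for the system.
    RULES = {
        "你好吗": "我很好",          # "How are you?" -> "I'm fine"
        "你叫什么名字": "我叫塞尔",   # "What's your name?" -> "My name is Searle"
    }

    def chinese_room(input_symbols: str) -> str:
        """Look up the input's shape and return the prescribed output shape."""
        return RULES.get(input_symbols, "对不起")  # fallback reply, equally unmeant

    print(chinese_room("你好吗"))  # prints 我很好
    ```

    Searle's point survives the sketch: make the table as large and clever as you like, and the manipulation is still purely syntactic.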

    ReplyDelete
  22. It’s tough for me to see how Zenon believed that the hardware/software distinction could explain the mind/body problem.

    If the software (mind) is independent of the hardware in the sense that the same software can be run on different kinds of hardware (bodies/brains) and still result in the same computational states, how does this explain that everyone’s mind works differently? The whole point of software is that it works the same on all hardware. If the mind were software, then it would work the same way on everyone’s hardware (brains), which is evidently not the case. Yes, people’s minds might work very similarly, but I’d venture to say that no two people have the same mind.

    To me this seems like a hard thing for Zenon to have overlooked. Maybe he viewed it a different way and would respond by saying that the software (mind) doesn’t work differently in different people: that the software codes for the overall ability to engage in different types of processes and how each person chooses to engage is up to them (which interestingly enough would leave space for free will).

    ReplyDelete
  23. When asked to think of the name of our 3rd grade teacher, imagery theorists claim that we first conjure up an image of her, then identify that image. This was said to bypass the true problems of cognition: how do we conjure this image, and how do we identify it? However, is it not simply a case of Hebb’s law, which states that neurons that fire together wire together? The more we bring the memory of this teacher to the surface, the stronger the synaptic connections become between her image, voice, categorization, and all our other sensory experiences of her. If this were the case, the answers to the above questions would be obvious: we conjure and identify the image through memory consolidation.
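    Hebb's rule itself is easy to state computationally. Here is a minimal sketch (my own toy, with a made-up learning rate, not a claim about how the brain implements it): each co-activation of two units strengthens the weight between them.

    ```python
    # Hebb's rule, "fire together, wire together": the weight change is
    # proportional to the product of pre- and postsynaptic activity.
    def hebbian_update(w: float, pre: float, post: float, lr: float = 0.1) -> float:
        """Return the strengthened connection weight after one co-activation."""
        return w + lr * pre * post

    w = 0.0
    for _ in range(5):  # five joint recollections of the teacher
        w = hebbian_update(w, pre=1.0, post=1.0)
    print(w)  # the weight has grown with repeated co-activation
    ```

    Note, though, that this only restates the association-strengthening step; it does not answer Harnad's question of how the image is conjured or identified in the first place.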

    ReplyDelete
  24. “There is still a Skinnerian uneasiness about counting the biomolecular details of brain implementation as part of cognitive science.”

    “If we are to explain our cognitive capacities, we must somehow come up with explicit hunches about how we are able to do what we can do, and then we have to test whether those hunches actually work: whether they can really deliver the behavioural goods.”

    I would like to explore the idea of the hardware-independence of software. While I agree that if we are looking for functional explanations of our cognitive capabilities we must not focus entirely on the physical implementation of those functions, I do see a role for the study of the structure of the brain in cognitive science. In many scientific fields, the “hunches” investigators get about function often come from structure. The link between structure and function in physiology, biology, chemistry, etc. is undeniable, and researchers have often made true functional inferences based on structural observations. Therefore, I think that the study of neuroanatomy can teach us a lot about cognition, and I am wondering why there is still an “uneasiness” associated with this idea.

    ReplyDelete
    Replies
    1. Myriam, I think you are right in that the study of neuroanatomy can teach us a lot about cognition in some sense but I believe that the uneasiness comes perhaps from the fact that even a physiological/neuroanatomy explanation of cognition does not really address “Hebb’s point […] about internal processes in the head that underlie our capacity to do what we can do”. I’m thinking of the “what was the name of your third-grade teacher?” example. Yes, you could study the physiology of what happens in the brain, what area of the brain is more active, neuronal responses, etc… when someone is answering the question but would that be able to answer the questions raised in the article: “How did our brains find the right picture? And how did they identify whom it was a picture of?”
      The physiological/neuroanatomy based answer still doesn’t explain why or how we see those images, how our brain went about finding the information, or how it went about identifying it. Then again, I am not up to speed on the study of either physiology or neuroanatomy – so it is very possible I am wrong! Just basing this off my understanding of the reading, that while this approach gives us information about cognition it doesn’t fully answer the easy question or really touch the hard question.

      Delete
  25. When Searle conducted the TT in Chinese, he only tested the symbol-manipulation part of the computational system. Even if the meanings of the symbols are only in the heads of the users, he could not pass the TT if he had no reference to the Chinese reality. Thus, since “the TT-passing program is no more cognitive than any other symbol system in logic, mathematics or computer science”, there is no reason for it to be considered a success in the domain. A dynamic relationship must be present for cognition to be considered achieved. If there is no link between the computational aspect of the program and any real-world experience, there is no way a system can successfully cognize the same way a brain does. There will always be a missing part. For example, I do not think that culture or beliefs such as religion could be internalized by any system the way humans internalize them. It would be easy for it to explain and know them, but it won’t be able to truly believe, understand and have faith in them.

    ReplyDelete
  26. As a note, I thought the idea of "anosognosia -- the 'picture completion' effect that comes with all conscious cognition -- that we don't notice what we are missing: We are unaware of our cognitive blind spots -- and we are mostly cognitively blind," was a brilliant inclusion in the piece. It made the homuncular problem easier to understand: introspection puts one in a position of analyzing the "decorative accompaniments" of cognitive function while failing to notice the implicit functions that would shed light on the 'how'. The implicit function is the computation, which is impenetrable to introspection.


    "The criterion for what was to count as cognitive was what could be modified by what we knew explicitly" -- what is this supposed to truly mean? What would be an example of something that we know explicitly? My understanding from the lectures is that the only two things we know explicitly to be true are the cogito ergo sum and mathematics. Perhaps I am interjecting a concept that doesn't belong here, hence the need for clarification, because by my definition of 'explicitly known' that would not leave much to be categorized as cognitive.


    The author also introduces the term "ungrounded symbol systems", which he goes on to say "are just as open to homuncularity, infinite regress and question-begging as subjective mental imagery is." However, he does not define what would constitute an ungrounded symbol system, and I would appreciate clarification on that idea!

    ReplyDelete
  27. I think that either my definition of cognition isn’t correct or I don’t properly understand Pylyshyn’s argument, but I have a basic objection to it: isn’t his computational theory of cognition basically homuncular?

    Consider the Turing machine. It is configured with a set of rules to perform symbol manipulations based on whatever state it’s in. Maybe that produces an output, but this isn’t what we think of as thought. If we ask a Turing Machine to solve a complex problem and it writes it out on paper, the Turing machine hasn’t done any thinking; it doesn’t have any awareness of what its output is. We still need somebody to pick up the piece of paper and read the output for it to have any meaning. If we include the subjective experience (i.e. “feeling”) in the definition of cognition, then thinking of cognition purely as a set of computations isn’t satisfying. Although we probably don’t notice the result of every computation we perform, we certainly are aware of some of them (i.e. we “feel” something when we recall our 3rd grade teacher’s name). The idea that cognition is purely computational still requires a little man to be picking up the strip of paper from the Turing Machine in our heads and reading it out for us to “feel” the result of that thinking.
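    The Turing machine described above can be made concrete with a tiny simulator. This is my own illustrative toy (a hypothetical unary-increment machine, not anything from the article): a table of (state, symbol) -> (write, move, next state) rules does all the "work", and nothing in the table reads or understands the final tape.

    ```python
    # Rule table for a machine that appends one '1' to a unary numeral.
    # '_' is the blank symbol; moves are +1 (right) or 0 (stay).
    RULES = {
        ("scan", "1"): ("1", +1, "scan"),  # skip over the existing 1s
        ("scan", "_"): ("1", 0, "halt"),   # write one more 1, then halt
    }

    def run(tape: list, state: str = "scan", head: int = 0) -> str:
        """Apply the rule table until the machine halts; return the tape."""
        while state != "halt":
            write, move, state = RULES[(state, tape[head])]
            tape[head] = write
            head += move
            if head == len(tape):
                tape.append("_")  # extend the blank tape on demand
        return "".join(tape).rstrip("_")

    print(run(list("111_")))  # increments unary 3 to unary 4: prints 1111
    ```

    The output strip "1111" only means "four" to the person who picks it up, which is exactly the little-man worry raised above.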

    Maybe I’m missing the point, but it feels to me like the computational theory of cognition is just a more roundabout version of the homuncular explanations it was supposed to outdo. Also, as a side note: isn’t it a bit circular to propose a functional explanation of cognition which contends that cognition consists of computations which are cognitively impenetrable?

    ReplyDelete
    Replies
    1. I've realized, having read further, that my question is essentially the argument the rest of the article is making, which is best encapsulated by Searle's argument. Basically, it's possible to perform TT-passing computations without a subjective understanding of them, meaning that computations don't explain cognition in its entirety.

      Delete
  28. In Searle’s Chinese Room argument, he distinguishes between syntax and semantics and shows how the distinction proves that the Turing Test is not decisive. Searle states that if he were the one who implemented the same Turing Test (T2) for symbol manipulation in Chinese, then he would pass it if he used the correct syntactic rules. But this does not mean that he actually understands Chinese or knows what he is talking about. To know this, he would have to be able to understand the semantics of the syntactically correct sentence. I wonder if this implies that syntax and semantics are two autonomous systems. If so, then the meaning of a sentence can be explained as the product of the principles of syntax and the principles of semantics. Without one another, the sentence would be meaningless. Theoretically, if the syntax-semantics interface were implemented, you would be able to understand the language, but practically it does not seem feasible.

    ReplyDelete
  29. I thought it was interesting that you brought up the hardware/software distinction as having been thought of as a potential solution to the mind/body problem, because (at least I think) it makes use of only one theory of mind: dualism, the theory that Descartes popularized and that is now so widely discredited. Just as Descartes thought it could be a simple solution to say we have a mind that occupies our body, the hardware/software distinction is too simple. This, to me, explains why we are unable to observe our cognitive functions directly through introspection; a computer program can call itself recursively, but our mental processes are unable to do the same.

    ReplyDelete
  30. Your article on computation reminded me a lot of Husserl’s Vienna lectures, in which he mentions authenticity and inauthenticity. In short, we could compare the Turing Test to inauthentic computation, as it is mediated and symbolic. However, if I understand correctly, cognition can be defined as authentic computation, being immediate and intuitive. Hence, the problem lies in making something ungrounded and inauthentic into something authentic and unmediated by the programmer, as you rightly declare in your conclusion.

    ReplyDelete
  31. “If the mind turns out to be computational, then not only do we explain how the minds works (once we figure out what computations it is doing and how) but we also explain that persistent problem we have always had (for which Descartes is not to blame) with understanding how the mental states can be physical states: It turns out they are not physical states! They are computational states.”

    While I agree with the argument that Zenon does not solve the mind/body problem, I am struggling to understand how the claim that mental states are not physical states, but computational states, helps us move forward with this question. As I understand this sentence, we are just altering the name of the ‘state’ while still facing the same issue of whether we are dealing with the mind/body as one or two entities. I understand that the same software can be run on many different kinds of hardware while the computational states stay the same, so is this simply suggesting that they are one and the same? This is not so much an insightful question as it is my questioning of my ability to follow the logical argument!

    ReplyDelete
  32. “The root of the problem is the symbol-grounding problem: How can the symbols in a symbol system be connected to the things in the world that they are ever-so- systematically interpretable as being about: connected directly and autonomously, without begging the question by having the connection mediated by that very human mind whose capacities and functioning we are trying to explain?”
    Computation, the rule-based manipulation of arbitrary symbols, seems to be observer dependent - it must be interpretable/interpreted in order to have any meaning. If cognition is understood to be computation, then where’s the interpretation coming from? Semantic interpretation is projected by us, humans, onto symbols. You can certainly have software built to interpret other software, but doesn’t that lead to, as so nicely stated in the article, computation all the way down? How can we apply a system to symbol grounding that isn’t homuncular, computational, or begging the question? I think this passage highlights one of the more important challenges we face, which may help us get from computation (only symbols) to cognition (includes meaning).

    ReplyDelete
  33. "Introspection only reveals how I do things when I know, explicitly, how I do them."

    I wonder if introspection is only useful for things we know with certainty, or if it could lead to the uncovering of previously hidden mental processes. Using the "What is the name of your third grade teacher" example, I wonder if actively paying attention to your information-searching capacities would uncover the "how" of how our brain surfaces this information.

    "A successful cognitive theory must make what is done implicitly explicit, so it can be tested (computationally) to see whether it works."

    I think this concept of testability is really essential to the debate on how we should approach questions about brain function. It is one thing to introspect and observe how you generate thought, and it is another thing altogether to develop an explicit theory which can be tested. An essential part of science is the testability of a given theory, and further, the opportunity to disprove it. It is because we hold scientific theories to such a high standard that they have the value which they do, and I therefore think that the explicit, testable nature of a theory of cognition is critical to its success.

    ReplyDelete
  34. “[I]ntrospection can only reveal how I do things when I know, explicitly, how I do them, as in mental long-division. But can introspection tell me how I recognize a bird or a chair as a bird or a chair? How I play chess (not what the rules of chess are, but how, knowing them, I am able to play, and win, as I do)? How I learn from experience? How I reason? How I use and understand words and sentences?” (Harnad 2005).

    What is the dividing line between tasks like performing long division and recognizing a bird as a bird or a chair as a chair? This is one of the questions cognitive science attempts to answer. My initial response to this question is that one is formally taught because its principles are not innate to the uneducated mind, and the other is simply the natural result of perception. Upon further reflection, I realized that it is the structure of the tasks that differentiates them. The former kind of task (ex: long division, swimming, driving a car) has a formal structure. These kinds of tasks can be taught because they have a structure and rules which are understood and can be reduced to simpler tasks as well as built back up. The latter kind of task (ex: recognition of something as a member of a category, learning from experience, reasoning) has a buried structure. We do not understand how or why we do these tasks because we just do them (to an extent of course; reasoning can be taught at a logical level, but basic reasoning comes innately). It seems a task of cognitive science is to uncover the structure of innate kinds of tasks.

    ReplyDelete
  35. The article “Cohabitation: Computation at 70, Cognition at 20” definitely provided a new perspective on the idea that cognition cannot be all computation. The introduction of Hebb’s example was also interesting, showing us that “cognition cannot all amount to just inputs and outputs plus the reward/punishment histories that shaped them.” There is way more to cognition than we know. For example, the article shows us the difference between memorizing “symbol manipulation rules” and computing with them. I liked the interplay of behaviorism and cognition, and Hebb’s point about explaining the internal processes that underlie our capacity to do what we do. At the same time, I still think it’s difficult to define cognition, and Skinner’s dismissal of theorizing about how we are able to learn, deeming it unnecessary and irrelevant, makes the whole concept harder to understand.
    Another interesting point to note is the symbol-grounding problem. As mentioned in the article, we have trouble explaining how words derive their meanings, and how we describe the whole idea of the “meaning” that we try to pair with a symbol or act. Computation cannot account for the distinct sounds and dialects that we utter, for example. There needs to be a larger criterion for computation in order to account for these differences.

    ReplyDelete
