Saturday, January 6, 2018

5. Harnad, S. (2003) The Symbol Grounding Problem

Harnad, S. (2003) The Symbol Grounding Problem. Encyclopedia of Cognitive Science. Nature Publishing Group/Macmillan.

or: Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1), 335-346.

or: https://en.wikipedia.org/wiki/Symbol_grounding

The Symbol Grounding Problem is related to the problem of how words get their meanings, and of what meanings are. The problem of meaning is in turn related to the problem of consciousness, or how it is that mental states are meaningful.


If you can't think of anything to skywrite, this might give you some ideas: 
Taddeo, M., & Floridi, L. (2005). Solving the symbol grounding problem: a critical review of fifteen years of research. Journal of Experimental & Theoretical Artificial Intelligence, 17(4), 419-445. 
Steels, L. (2008) The Symbol Grounding Problem Has Been Solved. So What's Next?
In M. de Vega (Ed.), Symbols and Embodiment: Debates on Meaning and Cognition. Oxford University Press.
Barsalou, L. W. (2010). Grounded cognition: past, present, and future. Topics in Cognitive Science, 2(4), 716-724.
Bringsjord, S. (2014) The Symbol Grounding Problem... Remains Unsolved. Journal of Experimental & Theoretical Artificial Intelligence (in press)

57 comments:

  1. At the end of last week’s reading titled “Minds, Machines and Searle 2: What’s Right and Wrong About the Chinese Room Argument” (Harnad), Prof. Harnad asserted that the CRA opened up new discourse for cognitive neuroscience, and one direction is “situated robotics.” The symbol grounding problem may be solved with a hybrid model in T3.

    In Taddeo and Floridi’s “Solving the Symbol Grounding Problem: a Critical Review of Fifteen Years of Research,” we learn about the hybrid model between symbolic and connectionist systems. But this hybrid model does not satisfy the zero semantical commitment condition, per condition (b) of the Z condition: no form of externalism is allowed.

    Is it not the case that the way we ground symbols starts with our sensorimotor capacity to identify objects/events in the real world, which would imply a form of externalism? So why would it be any different for modeling cognition in AI?
    I don’t see a way around this premise of the Z condition.

    How critical is the zero semantical commitment condition when trying to solve the Symbol Grounding Problem?

    ReplyDelete
  2. “So if we take a word's meaning to be the means of picking out its referent, then meanings are in our brains. If we use "meaning" in a wider sense, we may want to say that meanings include both the referents themselves and the means of picking them out.” The discussion here makes me wonder what kind of thing a rule for meaning is. If there are implicit rules for knowing what something refers to and how, I wonder if those rules are similar to the rules for symbols. Intuitively, it seems like the two (symbols and meanings) do have similar structures for their rules (e.g. “if this, then that”, input -> some transformation -> output)… and if they operate under similar rules, I wonder to what extent symbols and meanings really are that different?

    ReplyDelete
  3. The symbol grounding problem refers to how symbols acquire meaning, and the process by which this grounding occurs is through sensorimotor interaction. As Harnad (2003) says, “Meaning is grounded in the robotic capacity to detect, identify, and act upon the things that words and sentences refer to”. In the end though, even if a machine which passes T3 is able to do all of these things, we can never know what exactly is going on inside its head, since the other-minds problem bars us from getting any information about the thoughts and feelings of anything or anyone. Such a machine’s apparent symbol grounding ability provides a potential explanation for how we do it ourselves, but we can never really know for sure. What I’m a little confused about, though, is how symbol grounding accounts for referents that cannot be acted upon. For example, is symbol grounding at all relevant for words such as “compassion” or metaphorical language like “to kick the bucket”, which can be given a literal meaning grounded in sensorimotor abilities but can also be given a figurative meaning? If figurative meaning is indeed grounded, then how does the system reconcile the two meanings when we are presented with that statement?

    ReplyDelete
    Replies
    1. That is an interesting point regarding the more abstract/metaphorical concepts and how referents for those are extracted from the squiggles we describe as words. I think this relates to how he states in the robotics section that groundedness doesn’t seem like a sufficient condition for meaning: as you said, a robot living among us might “have” the sensorimotor capacities that we do while really nothing is going on inside its head, so how would it be able to understand abstract/metaphorical concepts that can’t really be understood through sensorimotor (grounding) capacities alone? The T3 robot therefore needs something else to be able to understand this. However, Harnad does mention that “to be grounded you would need the capacity to interact autonomously with a world of objects, events, properties and states that its symbols are systematically interpretable as referring to”. I suppose that “states” and “properties” could be loosely defined categories for the words you are referring to. But I am also curious as to how exactly these meanings fit into the grounding problem.

      Delete
    2. The mirror neurons that were discussed last week seem like they would be good candidates for explaining concepts such as compassion. We may not be able to explain what the feeling is like very well, but if you can identify when someone else is in that state, and then compare it to what you feel when you act that way, perhaps that is enough to consider the word grounded.

      Delete
    3. Maria, we can't know what is going on in someone else's mind (i.e., whether and what they are feeling) but we can (eventually) know what is going on in their head (T3, T4), once cogsci reverse-engineers it.

      A "peekaboo unicorn" is a horse with a single horn that vanishes without a trace whenever senses or measuring instruments are aimed at it. In otherwords, you can never see it or act on it. Yet you know perfectly well what it maans. Why? Because you know what "horse," "horn," "vanishes" etc. mean. In other words, you know it through a proposition (definition/description) composed by recombiningalready grounded symbols, which thereby grounds "peehaboo unicorn" too.

      Both the literal meaning of a metaphor and the literal interpretation of its figurative sense can be grounded that way too. Of course, like rhyme and rhythm, this loses the poetic sense of the metaphor. But right now we mean meaning in a literal, not a figurative sense (as in "I found that experience meaningful"). Ditto for idioms like "kicked the bucket": the analogy can always be spelled out literally, just as a thousand words (or more) can describe any picture (though never quite exhaustively).

      Naomi, meaning = grounding + feeling. It feels like something to mean something, or to understand the meaning of something.

      Oscar, how do mirror neurons explain compassion (or imitation, or anything)? But we can probably ground "migraine" by describing what it feels like (and maybe also its situational and neuroimaging correlates). Why not the same with compassion?

      Delete
    4. Professor Harnad - just to clarify/sum up: are you saying that more abstract words that we use for feelings or for non-existent beings are given meaning because we can employ a combination of symbols that already are grounded to explain it?

      I have one further question: how do we then explain the symbols of function words that we use in these combinations? Does that go back to the idea that some (very few) categories may be innate? Would that mean that they don't need to be grounded?

      Delete
  4. Harnad describes meaning as "a big causal nexus between a head, a word inside it, an object outside it, and whatever "processing" is required to connect the inner word to the outer object."

    I wonder if for a symbol to be grounded, we also require at least one other person to have the same or similar kind of processing in their heads. In other words, doesn't our way of grounding depend on shared experience and meaning? Imagine if only one person existed in the world and thus symbols and referents were only connected for this one person - is this meaning or is it just matching two stimuli? Contrast this with the way our world actually is, where our meanings are influenced by the aspect of commonality. This is not to say that everyone has the exact same meaning for every single word, especially for abstract words (e.g. love) and metaphorical language. Rather, I am arguing that our grounding of symbols is built upon a common denominator determined by a general notion accepted by the majority of us. Of course, we will never know if another person really has the same/similar meaning as us, due to the other-minds problem.

    ReplyDelete
    Replies
    1. This is a really interesting argument, Devona. I agree that it seems the general notion of meaning is a collective one because much of what we learn is through experience. As a child grows up, he or she learns to associate meaning with various objects often guided by a parent or guardian. In this sense, a lot of our general understandings are made possible by the understandings of people before us. I think there does have to be a sort of consensus when applying meaning to more concrete things (such as book, dog, etc.) but when more abstract things are involved (such as emotion, language, etc.) there is more room for individual interpretation.

      Delete
    2. Devona, Wittgenstein suggested (in his "private language argument") that if there were only one person in the world, they could never invent a private language, because there would be no one to correct them when they used a word wrong.

      This is true for private experiences, such as migraines (M) vs tension-headaches (T): if Ms and Ts had no outer signs, only what they felt like, Robinson Crusoe could never know whether what he was calling an M today wasn't what he had been calling a T yesterday. And it wouldn't help to describe it as "steady" vs "throbbing," because there would be no way to check whether you were using those words the same way each time either.

      This is also called the "problem of error": Wittgenstein thought you needed social feedback to know whether you were using words correctly or consistently.

      On the other hand, for external things (like mushrooms vs toadstools), Robinson Crusoe could learn to categorize them, where categorizing means doing the right thing with them: eat the mushrooms, don't eat the toadstools. Nature corrects any problems of error for categorization (when miscategorization has consequences).

      Then there would be nothing wrong with Robinson collecting mushrooms and toadstools, labelling them "M" and "T," and referring back to the labels -- but why the labels, if he already knows which is which, and there's no one else to tell?

      Tina, the problem of error (corrective feedback: uncertainty-reduction) can be solved for all objective categories, whether concrete or abstract. (All categories are abstract; there are just differences in degree). The essential thing is that there must be a right and wrong (or better and worse) otherwise what one is doing is arbitrary and indeterminate. Some think social feedback is necessary, but it's not clear why; any corrective feedback from consequences should be enough. But the real question is why we would bother to name and describe our categories at all, unless there was someone else to tell or ask.

      Delete
  5. As stated by Luc Steels in his article, "if we want to solve the symbol grounding problem, we next need a mechanism by which an (artificial) agent can autonomously generate its own meanings. This means that there must be distinctions that are relevant to the agent in its agent–environment interaction"

    If the connection between a symbol and its referent is done by sensorimotor grounding, how do we connect symbols to referents that can’t be accessed with the sensorimotor system?

    How could we create an artificial agent that is able to generate its own meaning for words like envy, fascination, and grief? In other words, how could we create an artificial agent that is able to feel and thus classify emotion? I guess this is indicative of a larger question: are emotions left out of the symbol grounding problem because they relate to the “hard problem” of consciousness and not to the easy problem (which is the one we're trying to solve)?

    ReplyDelete
    Replies
    1. Ayal, your question about what kinds of referents would not be accessible to sensorimotor systems is interesting. Intuitively, it is easy to say abstract concepts and emotions, but when I took the time to think about it, the same thing applies to adjectives, adverbs, nouns and everything! We internalize that "chair" is a category. We apply it to many different types of objects: chairs that may be small, big, yellow, blue... yet they are all chairs! How do we learn that they mean the same thing? They are visually different things but have the same meaning. Similarly, we learn, at some level and to some extent, that certain feelings and behaviors refer to love, fascination, grief. But truly the question comes back to "How could we create an artificial agent that is able to feel and classify emotions?" That is the big question indeed. How is it that symbols in our brains get to be grounded, i.e., how do they possess the capacity to pick out their referents?

      Delete
    2. Yet we do manage to get good enough to agree on migraines vs. tension headaches, just from verbal descriptions and shared experience -- exactly as we do in agreeing on what blue looks like. (The color of the sky and the ocean is close enough; the rest is just relying on the assumption that you and I both see the same color when we view the sky and ocean.)

      Only philosophers puzzle over the possibility that I might be systematically seeing what you see as yellow when I am looking at things we both call blue, and vice versa. (The "inverted spectrum" puzzle.) Not worth puzzling about; it just reflects the fact that feelings themselves are underdetermined by language and function: any sufficiently consistent mapping should do, if feeling is needed at all. (Basically, feeling and function are incommensurable, though correlated.) This is yet another aspect of the other-minds problem (how you can know for sure whether and what another feels) as well as the hard problem (how and why we feel at all).

      Delete
    3. I think it's interesting that you (Ayal) would start off by asking about "referents that can't be accessed with the sensorimotor system" and then list feelings such as "envy, fascination, and grief". It feels like something to experience any of these emotions, and we experience them by interacting with the world around us via our sensorimotor system. Then, whether emotions are left out of the symbol grounding problem, well, no… "envy" written on a paper still has no meaning on its own but comes to have meaning when it is grounded - and it can be grounded, similarly to the above examples of describing migraines vs. tension headaches or what blue looks like.
      I think this also relates to your last question: "how could we create an artificial agent that is able to feel and thus classify emotion?" First off, we would not need to program the artificial agent to "feel" the emotions to classify them. The idea is that we would aim to create an artificial agent which is equipped with some sort of sensorimotor system equivalent (T3 passing). Even at that point we couldn't be sure it was able to "feel", as is mentioned in the reading, "it is possible that even a robot that could pass the Turing Test, "living" amongst the rest of us indistinguishably for a lifetime…could be a zombie".

      Delete
  6. “But if groundedness is a necessary condition for meaning, is it a sufficient one? Not necessarily, for it is possible that even a robot that could pass the Turing Test, "living" amongst the rest of us indistinguishably for a lifetime, would fail to have in its head what Searle has in his: It could be a Zombie, with no one home, feeling feelings, meaning meanings.”
    I asked a question related to this in class, and didn’t fully understand the answer. I think this reading has clarified the issue somewhat for me. I’d like to try and answer it and expand upon it now. It depends on the following ideas:

    1. We said in class that specific sets of (roughly 1500) words, whose symbols are grounded, are sufficient to ground the remainder of the symbols in a language, without any further interaction with the real world.
    2. The above passage states that a robot could theoretically ground symbols without consciously interpreting their meaning (i.e. it’s a T3 Zombie).
    3. We’ve said in class, and it’s been indirectly reiterated in this text, that a robot needs to be able to pass T3 in order to pass T2. If a robot can’t ground symbols, its conversational capacities will always be limited and the limitations will be easy to spot.

    My question was: Can a robot, who cannot pass T3, but who has a minimal set of grounded symbols as described above written into its programming a priori, pass T2?

    I see the inherent contradiction in the question now. If a robot isn’t T3, then it doesn’t have sensorimotor capacities, so it can’t ground symbols in the first place. However, it’s not clear to me whether a T3 robot who acquired a minimal set of grounded symbols, having its sensorimotor capacities removed, would still be able to pass T2.

    My intuition says it’s possible. If you can ground all symbols in a language based on a small subset of grounded symbols, surely that’s enough to make for a convincing pen-pal for a lifetime? What are you missing if you’re able to ground everything? As was stated above, you don’t need the grounded symbols to have actual meaning; you can pass the Turing Test and not be conscious.

    ReplyDelete
    Replies
    1. At first glance, your intuition seems plausible. An adult human, having lost their senses, would still have a sense of feelings and meaning despite being “locked” in their brain. This is the trouble with anthropomorphizing robots. When you talk about removing sensorimotor capacities for a T3 passing robot, you’re not talking about something equivalent to a human going blind. You’re removing the system that grounds the symbols in the first place and converting it back to a symbol system.

      In Harnad’s model of symbol grounding, he proposes a system which combines symbolic and connectionist systems.
      1. Iconization: Sensory data from a real world thing creates an internal analog equivalent of the sensory projection. Picture the image of a horse that hits your retina being imprinted/encoded in a network of neurons (that’s how I understood it).
      2. Discrimination: the process of judging the similarity/differences of the inputs/icons
      3. Identification: Assigning (arbitrary) names to the inputs/icons. Icons are reduced to their invariant features (features that remain the same) i.e. categorical representation

      Stages 1 and 2 are nonsymbolic, and are intrinsic to sensorimotor capacities. Stage 3 grounds the symbols (SYMBOLIC) by creating a connection with their respective icons (SENSORIMOTOR). So from my understanding, to remove sensorimotor capacities would be to remove the analog store of icons, and thus step 3 would no longer be possible. If step 3 occurred before you removed the sensorimotor capacities, the grounding would be lost after the process. You can’t just convert the grounded symbols to symbols and proceed with a symbol system that has “pre-acquired grounding”. You’ve just created new meaningless squiggles and squoggles.
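
      To make those three stages concrete, here is a toy sketch in Python -- purely illustrative, not Harnad's actual model: the "icons" are just made-up feature vectors and every function name is hypothetical.

      # Toy sketch of the three stages above; all names and feature vectors are hypothetical.
      def iconize(sensory_input):
          # Stage 1: keep an analog-like internal copy of the sensory projection
          # (here, trivially, the raw feature vector itself).
          return tuple(sensory_input)

      def discriminate(icon_a, icon_b):
          # Stage 2: judge how similar/different two icons are
          # (here, a simple distance between feature vectors).
          return sum(abs(a - b) for a, b in zip(icon_a, icon_b))

      def identify(icon, categories):
          # Stage 3: attach an arbitrary name by matching the icon to the closest
          # stored categorical representation (its invariant features).
          return min(categories, key=lambda name: discriminate(icon, categories[name]))

      # Usage: two crude categories defined by prototype feature vectors.
      categories = {"horse": (1.0, 0.9, 0.1), "zebra": (1.0, 0.9, 0.9)}
      print(identify(iconize([1.0, 0.8, 0.2]), categories))   # -> horse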

      Ok, you say, well let’s keep all that in and make the robot blind and immobile. I’d argue that wouldn’t be sufficient to say you’ve removed the sensorimotor capacities. I think the issue here lies in understanding what ‘sensorimotor capacities’ really are. On a last note, you said “you don’t need the grounded symbols to have actual meaning.” I think Harnad has said that meaning = grounding + feeling, so you certainly need grounded symbols for meaning. Grounding doesn’t imply meaning though, so yes you can have a T3 zombie, but the problem of other minds prevents us from knowing whether or not that is the case.

      Delete
    2. Willem, the minimal grounding set is formally enough to ground all other words via definition alone, but it does not mean we (or a T3) come anywhere near doing it that way in real life, or that we even could. It's just an indication of the latent power of language. In practice, we (people and T3s) keep looking and hearing and experiencing directly throughout life, as a complement to what we learn verbally.
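
      Purely as a toy illustration of that latent power (the dictionary entries and the "grounded core" below are made up, not real data): a word counts as grounded once every word in its definition is grounded, and groundedness then propagates outward from the core through definitions alone.

      # Toy illustration only: grounding spreads from a small grounded core through definitions.
      dictionary = {
          "mare":    ["horse", "female"],
          "unicorn": ["horse", "horn"],
          "foal":    ["young", "horse"],
      }
      grounded = {"horse", "female", "horn", "young"}   # hypothetical sensorimotor core

      changed = True
      while changed:
          changed = False
          for word, definition in dictionary.items():
              if word not in grounded and all(w in grounded for w in definition):
                  grounded.add(word)        # grounded indirectly, by definition alone
                  changed = True

      print(grounded)   # the core plus everything reachable from it through definitions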

      But grounding is not the same as meaning, though it's (probably) a necessary condition for meaning.
      Meaning = (T3) Grounding + Feeling (it feels like something to understand or mean something: each meaning feels different: 2 + 2 = 4; 2 + 2 = 5; the cat is on the mat)

      If a robot cannot pass T3, it cannot pass T2. If a T3 had grounded a minimal grounding set, why would it not have the capacity to ground every other category directly too? (Remember that we are all just speculating about what it takes to ground categories at T3 scale.)

      And you can't "write in" a grounding: That's the whole point of the symbol grounding problem, that the solution cannot be just more symbols!

      I haven't a clue what happens if you remove all sensorimotor capacities, whether from a T3 robot or a real person... Probably the person dies...

      Christina, I think you might have been saying the same things I said above in response to Willem: speculating about removing sensorimotor function from (hypothetical) T3s or real people is just sci-fi speculation: we get out of it whatever we put into it.

      Delete
    3. Professor Harnad: I’m not sure if my intention came through in the question. It’s not sci-fi speculation for its own sake. I’m trying to understand whether achieving a minimal grounded set, and then grounding the rest indirectly via hearsay, is enough to pass T2. You said that passing T3 is necessary to pass T2.

      I’m trying to disentangle whether that’s just because sensorimotor input is necessary to ground symbols in the first place, or whether continued sensorimotor ability is required to pass T2. I know you were trying to illustrate the latent power of language–I’m trying to see how far it goes.

      But I think I understand why it doesn't work in light of Christina's response.

      Christina: Regarding my comment about meaning and grounding. I misspoke. I meant to say that grounding isn't sufficient for meaning. I.e. you need something else (namely feeling, as you pointed out) to have meaning. Thanks for pointing out the error.

      Delete
  7. “Once one has the grounded set of elementary symbols provided by a taxonomy of names (and the iconic and categorical representations that give content to the names and allow them to pick out the objects they identify), the rest of the symbol strings of a natural language can be generated by symbol composition alone.”
    My question about this proposition is: how big does the set of ‘elementary symbols’ have to be in order to be able to generate all possible propositions in a language? It would seem to me like it has to be a very large set in order to encompass everything and I wonder how a child could learn all of the elementary symbols and so, become fluent in a language (assuming the poverty of the stimulus argument to be correct). Would the child ever be able to learn enough of the elementary units to make all possible propositions or are we to assume that knowing the full set of elementary symbols grants perfect mastery of the system which no one has and that eloquence and understanding require only a subset of the set of all elementary symbols?

    ReplyDelete
  8. “So the meaning of a word in a page is "ungrounded," whereas the meaning of a word in a head is "grounded" (by the means that cognitive neuroscience will eventually reveal to us).”
    In my opinion, this is why the field of cognitive neuroscience is very important in shedding light on some of the primordial questions in the study of human cognition. Unlike the idea advanced by Fodor in last week’s discussion, understanding exactly how the animal brain has developed to function the way it does is critical for understanding our cognition and consciousness. This neural understanding reaches far further than simply understanding how we have solved the symbol grounding problem. As Harnad points out at the end of this publication, even achieving a machine model capable of passing the Turing test does not guarantee that the system is conscious. Just as in the case of the symbol grounding problem, we cannot advance the field any further without first generating a comprehensive understanding of the only certifiably conscious system we have access to – the animal brain. Without this investigation into the functional organization of the animal brain, we will never come close to building a fully conscious electronic model.

    ReplyDelete
  9. I thought that Prof Harnad's "The Symbol Grounding Problem" (2003) was a good, well-explained introduction to the titular issue, but my question now is: have we gotten any closer to defining "groundedness" in the 15 years since this was published, or is the concept inherently undefinable until we figure out the hard problem of consciousness, seeing as explaining the meaning of things is closely linked to that problem?

    ReplyDelete
  10. I am not sure if this article was easier to follow because we had gone through the first half of the terms multiple times in class OR if I am just finally understanding the material. Regardless, in the second half of the article Stevan discusses two properties of the symbol grounding problem. One is the capacity to pick out referents, which can only be done through interaction with the environment; the second is consciousness, which is briefly brought up. With regard to property #1, does this feed into what has been said about how a T3 robot is needed to pass the T2 test? That is, is a T3 robot needed to pass T2 because a T2 robot cannot ground symbols, never having interacted with the environment? This would make sense, since the T2 test would require the candidate to have some experience of its environment, and to have received feedback from it, in order to keep up an email conversation with a human who has interacted with objects and different situations. But a robot that can handle sensory input in that way would be a T3 robot.

    ReplyDelete
  11. The discussion of cryptologists is really interesting to me, because they take meaningless symbols and ascribe meaning to them based on other experiences, without actually knowing their true meaning. Further, linguistic universals add another interesting dimension, because they are seemingly arbitrary in meaning but are consistent across many languages and provide meaning to some of the symbols.

    In using the Chinese/Chinese dictionary as a tool to learn Chinese as a first language, how does this differ from the way we learn language naturally? As children we just hear meaningless jumbles of sounds and slowly start to attribute meaning to them, without having a “definition”. Is this because a Chinese/Chinese dictionary would not connect the meaningless symbols to the real-life experiences that they map to?

    The inherent connection to experience that is necessary for producing meaning makes me think about someone who is blind thinking about the words “to see”. They can know what the definition is, the symbol-to-symbol explanation of “to see”, but they will never know what it really is to see, because this is an experience they lack.

    ReplyDelete
  12. While it makes sense that the meaning inside a head is grounded while the words on a piece of paper are not -- due to the sensorimotor interactions that occur in the former -- I wondered if these sensorimotor interactions were sufficient on their own, or if grounding requires a mutual understanding created by interacting with other agents (who are also experiencing sensorimotor interactions of their own). In his article 'The symbol grounding problem has been solved, so what's next?', Luc Steels concludes that:
     
    "Each agent builds up a semiotic network relating sensations and sensory experiences to perceptually grounded categories and symbols for these categories (see Plate 12.3). All the links in these networks have a continuously valued strength or score that is continually adjusted as part of the interactions of the agent with the environment and with other agents. Links in the network may at any time be added or changed as a side effect of a game. Although each agent does this locally, an overall coordinated semiotic landscape arises."
     
    It then appears -- to me at least -- that in order for the agents to be able to autonomously generate meaning and autonomously ground meaning in the world through their sensorimotor  interactions, there must be a constantly changing and adapting input from the environment and other agents. In other words, suggesting that "an overall coordinated semiotic landscape arises" seems to be suggesting that in order to ground symbols autonomously, we require input from not just our sensorimotor interactions but the information based on the sensorimotor interactions of others to generate a mutual understanding of the meanings of the symbols. 
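
    A minimal sketch of the kind of continuously valued, continually adjusted links that passage describes (my own simplification, not Steels' implementation; the class, the words and the numbers are all made up) might look like this:

    # Minimal sketch (not Steels' code): each agent keeps a continuously valued score on
    # every word<->category link and nudges it up or down after each interaction.
    class Agent:
        def __init__(self):
            self.links = {}                              # (word, category) -> score

        def best_word(self, category):
            candidates = {w: s for (w, c), s in self.links.items() if c == category}
            return max(candidates, key=candidates.get) if candidates else None

        def adjust(self, word, category, success, step=0.1):
            score = self.links.get((word, category), 0.5)
            score += step if success else -step
            self.links[(word, category)] = min(1.0, max(0.0, score))

    # Usage: after a couple of interactions, the successfully used link dominates.
    agent = Agent()
    agent.adjust("wabu", "red-thing", success=True)
    agent.adjust("gola", "red-thing", success=False)
    print(agent.best_word("red-thing"))                  # -> wabu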

    ReplyDelete
    Replies
    1. Hannah, I agree Luc Steels is suggesting that sensorimotor interactions alone are not sufficient to successfully ground symbols, but rather that there are semiotic networks (“the links between objects, symbols, concepts, and their methods”) which are generated and constantly modified through interaction with other agents. These semiotic networks become coupled to those of others and become coordinated amongst groups. Given this suggested necessity of agent-agent interaction, I wonder if these networks and such grounding can be achieved through sensorimotor interaction with the outside world, but in isolation from other agents (or with highly limited agent-agent communication/interaction)?

      Delete
  13. "In this either/or question, the (still undefined) word "ungrounded" has implicitly relied on the difference between inert words on a page and consciously meaningful words in our heads. And Searle is reminding us that under these conditions (the Chinese TT), the words in his head would not be consciously meaningful, hence they would still be as ungrounded as the inert words on a page."
    If symbol grounding requires the mind to "mediate between the word on the page and its referent", are the symbols ungrounded when, say, you are reading another language or a word you do not know? I understand that symbol grounding refers to this understanding and processing that is going on inside our heads, which makes me further question whether the meaning inside a computer is grounded or ungrounded. For humans, grounding requires interaction with the environment and learning processes, which brings us back to the question of computationalism and the Turing Test. Even if something can pass T3, it is still unclear if there is consciousness, and therefore experience giving it the capability to ground symbols.

    ReplyDelete
    Replies
    1. I think your question about reading a word in another language is really interesting. Assuming there is no one or nothing with you that can help you understand the word, would that imply that in this situation that word does not have meaning?

      Would this mean that a word is only grounded in the instances that a conscious and understanding being was interacting with it? Or are words grounded as long as individuals exist that understand their meaning?

      In other words, is meaning only present during mediation?

      Delete
  14. “But if groundedness is a necessary condition for meaning, is it a sufficient one? Not necessarily, for it is possible that even a robot that could pass the Turing Test, "living" amongst the rest of us indistinguishably for a lifetime, would fail to have in its head what Searle has in his: It could be a Zombie, with no one home, feeling feelings, meaning meanings.” (Harnad, 2003)

    Searle’s CRA shows us that Searle could pass the TT in Chinese, solely on the basis of rule-based manipulation of ungrounded symbols, as to Searle, a non-Chinese speaker, the Chinese symbols themselves are just meaningless squiggles and squoggles. Because Searle has no understanding during the Chinese Turing Test, he concludes that a computer implementing the same program doesn’t understand anything either (because of implementation-independence, the hardware is irrelevant). This shows that meaning is not necessary to pass T2; T2 is pure computation of ungrounded symbols.

    Let’s turn to T3 however, which is the robotic version of the TT and is both sensorimotor and symbolic. Because of the sensorimotor component of T3, T3 is grounded (whereas T2 is not). Therefore, in T3 symbols are connected to their referents – and thus the robotic T3 has the ability to accurately recognize and appropriately manipulate the things in the outside world that the words refer to.

    So, as Harnad asks, while groundedness is a necessary condition for meaning, is it sufficient? Can we conclude that T3 has meaning and understanding, just by the knowledge that T3’s symbols are grounded? It appears that because of the other-minds problem, we will never know if T3 feels or understands and thus we will never be able to answer this question…

    ReplyDelete
  15. I think that our understanding of the symbol grounding problem has important implications for passing the Turing test. First, Harnad believes that only a T3 can pass the T2 test. However, if I am understanding the symbol grounding problem correctly, it may be the case that a T3 that is able to formally manipulate symbols well enough to fool a human for eternity may still not understand the meaning of those symbols (a zombie). What I see here may be a bit of a contradiction. How is it that a robot could pass as a human without understanding the meaning of symbols? I find it difficult to believe that, even if programmed with a perfect rule book of correspondences between input and output, a robot would be able to fool us into believing that it is a human without grasping the meaning of the symbols it is manipulating. I think that human language is too meaning-reliant to allow this. My point here is that I am not sure that a T3 would pass T2.

    ReplyDelete
  16. Playing in the space between T2 and T3: if we gave a machine photographs, videos, a 3D model, or better yet a perfect [strong Church-Turing] simulation, could we aid the machine with symbol grounding without it needing real-world sensorimotor capabilities? Or would this count as having T3-robotic sensorimotor capabilities? This question was my first reaction to the article, but upon reflection I wonder if a machine would even be able to ground referents in this way, because without an initial grounding in the real world, the machine would not understand the provided references in the first place.

    ReplyDelete
  17. “One property that the symbols on static paper or even in a dynamic computer lack that symbols in a brain possess is the capacity to pick out their referents. […] this is what the hitherto undefined term ‘grounding’ refers to.”

    Symbol grounding is the reason that being able to pass T3 is a pre-requisite for passing T2, the pen-pal version of the Turing Test. As Harnad states, “To be grounded, the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities – the capacity to interact autonomously with that world of objects, events, properties and states that its symbols are systematically interpretable (by us) as referring to.” Without these sensorimotor or “robot” capacities, then, it is impossible for a symbol system to pick out referents, and thus, confer meaning to words. I understand and agree with this argument. However, I am not sure I understand the role of the second property that might be required for meaning, which is consciousness. If there is no way of knowing what functional capacities consciousness can provide, as Harnad suggests, how can we know if it provides any?

    ReplyDelete
  18. I disagree that a word on a computer screen is grounded in the same way that a word in a brain is. Meaning-making happens in both brains and computers, therefore we can say that meaning-making is implementation-independent and congruent with computationalist ideas. However, meaning-making in brains is different from meaning-making in computers. Every individual brain is shaped differently, and therefore they are capable of finding different meanings in the same entity. My understanding is that the “software” in this situation is programmed to find meaning, but not programmed to find the same meaning. This is to say that meaning-making in brains is subjective (the software is the same, the hardware can be different, and the outcomes can be different), whereas meaning-making in computers is objective (the software is the same, the hardware can be different, and the outcomes are the same).

    ReplyDelete
  19. Regarding the dynamic process of meaning making in the brain, it is clear that a connection is being made between the inner word and outer object. In the section on formal symbols, the example of arithmetic symbol manipulation is given, where arithmetic symbol manipulation relies on shape, not meaning. This is because even though we are able to make sense of the symbols that are given to us, this “meaning” is in our heads and not in the symbol system itself. Since symbol interpretations can be accounted for by sensorimotor interactions with the environment, I’m wondering, then, whether real-time sensorimotor activity is sufficient for meaning making?

    ReplyDelete
  20. "Robotics. The necessity of groundedness, in other words, takes us from the level of the pen-pal Turing Test, which is purely symbolic (computational), to the robotic Turing Test, which is hybrid symbolic/sensorimotor (Harnad 2000). Meaning is grounded in the robotic capacity to detect, identify, and act upon the things that words and sentences refer to."
    If I understood correctly, this passage is arguing that the T2 robot is purely computational, since it lacks the T3 robotic capacity, which is necessary for groundedness, and groundedness is one of the necessities of meaning. I understand that a T3-passing robot is able to ground symbols due to its symbolic and sensorimotor capacities, but I am getting confused about the role of T2. If T2 is just computational, whereas T3 is dynamical, is this in contradiction with the argument that T3 is required to pass T2?

    ReplyDelete
  21. I think that symbols and picking out their referents are a precursor to meaning. Although the process of meaning is unclear, it is largely dependent on worldly experience, which differs from associating a symbol with a referent, as that can be done with little sensorimotor interaction (i.e. reading a Chinese dictionary and associating the word with its definition). Meaning, on a deeper analysis, would be associating feelings and emotions with a symbol, which is what would distinguish us from a T3.

    ReplyDelete
  22. I finally (think I) understand why a machine would need to pass T3 to be able to pass T2. Meanings are "grounded" in the sensorimotor experiences we humans have, and our ability to pick out the referents of meaningless symbols. A machine that could only pass T2 would not have these sensorimotor experiences, and therefore the symbols they are trying to interpret would be ungrounded. They would be unable to communicate coherently, and therefore would fail to pass T2. The distinction between systematic interpretability and meaning is crucial - a computer can formally manipulate symbols that are semantically interpretable, but it cannot be said to derive meaning - something that Searle's Chinese Room Argument makes clear. It would be interesting to know if the symbol system of people without any sensorimotor connection to the external world (blind, deaf, and mute) would be grounded, and if they are able to derive true meaning from the referents in their symbol system. Would they be considered different than a machine that does not have any nonsymbolic sensorimotor experience?
    I agree with the claim that "our brains need to have the "know-how" to follow the rule, and actually pick out the intended referent, but they need not know how they do it consciously." This is obviously true, since we are seldom aware of how a bodily or mental process transpires, yet they consistently occur seamlessly and effortlessly. However, the article also mentions that without consciousness and minds picking out the referents, there would be no connection between scratches on a paper to their referents. Is this to say that once scientists figure out the means by which something ungrounded becomes grounded in the mind, machines will pass T3?

    ReplyDelete
  23. Can we say that the shape of words is what gives them meaning? In any alphabetic system of writing (that is, one that uses letters rather than characters) letters represent sounds which are combined to form meaningful, complex strings of sounds (words). The shape of the letters is what is important, the sound is attached to the shape. This is different than a numeric system, in which there is no sound or meaning attached to a numerical symbol. As Stevan explained in the reading, there is no “two-ness” attached to the symbol “2”.

    I am initially inclined to believe that the shape of words does give them meaning, but there is nothing inherent about the sound that a letter makes and its shape. I could very well decide to switch the sounds of the letters “b” and “d”, and although it would require a mental adjustment to attribute “b” and “d” sounds to the opposite letter, I would eventually learn to read this way. So I reformulate my question, can the shape of words give them meaning, even if the meaning is not inherent to the shape?

    ReplyDelete
    Replies
    1. Hi Anna,

      In some Chinese characters, the shapes of the words actually do give a hint as to what they mean. Take the character for mountain in Chinese "山". It's almost like it's a pictograph, in that it looks like a mountain with three peaks! However, to be able to know that this character looks like a mountain, we would still need the sensorimotor experience of seeing a mountain to be able to make that connection.

      I still don't think that the shape of the word alone (one with letters rather than characters) can give us meaning though. Yes, they might correspond to a certain phoneme which can build up what the word sounds like. But this sound to us is also meaningless unless we associate it through referents with our sensorimotor capacity.

      Delete
    2. Hi Celine, I didn't know this, that's very interesting and good to know! Thanks!

      Delete
  24. To be grounded, the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities -- the capacity to interact autonomously with that world of objects, events, properties and states that its symbols are systematically interpretable (by us) as referring to. It would have to be able to pick out the referents of its symbols, and its sensorimotor interactions with the world would have to fit coherently with the symbols' interpretations.

    To be able to know what a dog is, we must call upon our memories of previous interactions with a dog. We might remember what a dog looks like, what it feels like when we are petting it, what it sounds like when we hear its bark. All of these sensorimotor components are stored in our memory, and we must retrieve these experiences to understand the meaning behind “dog”. Thus, it is also important to note that memory is important for grounding symbols as well.

    But if groundedness is a necessary condition for meaning, is it a sufficient one? Not necessarily, for it is possible that even a robot that could pass the Turing Test, "living" amongst the rest of us indistinguishably for a lifetime, would fail to have in its head what Searle has in his: It could be a Zombie, with no one home, feeling feelings, meaning meanings.

    I am confused by this passage, because to me it seems that if the robot has sensorimotor capacities then it would be able to understand the meaning behind the symbols. I think that by being able to ground symbols, most of the work of figuring out what the meaning is has already been done. Of course, the robot would need a large memory to store these sensorimotor experiences to help remember the meaning, but wouldn’t that be the extent of how robots can understand? I guess my next question would be whether T4 is even necessary to pass the Turing Test. The T4 can do everything the T3 can, but it has the same neural networks as humans. I’m not sure if having these neural networks would bring about “consciousness” -- the component of meaning that we are missing. We haven’t been able to figure out the functional capacities that consciousness corresponds to in humans yet, so we can’t expect this of robots that we engineer.

    ReplyDelete
  25. I would like to bring up a point that relates to how we attribute meanings to words. When we use words in conversation or in internal thought, there is a dialogic process which entails that the speaker and the listener have some common knowledge. In the case of your own personal thoughts, the speaker and the listener are the same person, so this makes it easy. However, when you are engaging in a conversation with another person (face-to-face or via email), the social context of that conversation is what helps the listener make sense of the sentences you are producing as the speaker, an idea brought up by Steven Pinker (a psychologist who writes about language). This is particularly important in situations where words have double meanings. For example, if I am talking about a trip to the Amazonian forest and tell my friend how “there were so many flies out there”, I am referring to the insect and I assume that, in this context, my friend will understand. On the other hand, the single string of symbols “flies” could have meant other things in another context (ex. time flies).

    ReplyDelete
  26. Meaning = Symbol-Grounding + feeling (what it feels like to mean what you are saying: i.e., what Searle lacks in the Chinese Room, for what he says in Chinese)

    Symbol-Grounding = the capacity to pick out the referent of the symbol in the world (not another symbol referring to it).

    T3 capacity is what grounds the symbols in T3.

    ReplyDelete
  27. “A computational theory is a theory at the software level; it is essentially a computer program. And software is "implementation-independent." That means that whatever it is that a program is doing, it will do the same thing no matter what hardware it is executed on. The physical details of the implementation are irrelevant to the computation; any hardware that can run the computation will do.”
    I think that applying the idea of symbols being grounded or not grounded in the mind brings up an interesting topic to explore. It’s interesting to consider whether various concepts and symbols in artificial intelligence should be grounded or ungrounded. If we use the idea that we learned in class, that cognition is not equal to computation, then by this logic symbols could not be grounded by computation alone. I would also add that in many cases what makes people unique and interesting are their ideas, opinions and experiences, which are shaped through interactions with their environment. Therefore, allowing artificial intelligence to be implementation-independent and yet not all have the same ideas would allow for more freedom of mind.

    ReplyDelete
  28. In section 1.2, Symbol Systems, I found the explanation of the difference between implicit and explicit rules very helpful, along with the point that just because a behavior is interpretable as rule-governed does not make it governed by a symbolic rule.

    I found "the interpretation will not be intrinsic to the symbol system itself: It will be parasitic on the fact that the symbols have meaning for us, in exactly the same way that the meanings of the symbols in a book are not intrinsic, but derive from the meanings in our heads. Hence, if the meanings of symbols in a symbol system are extrinsic, rather than intrinsic like the meanings in our heads, then they are not a viable model for the meanings in our heads" this idea to be unclear. I am not sure what the intrinsic quality of a symbol has to do with the viability of the model? I believe the confusion arises due to a lack of clarity on the concept of intrinsic interpretation.

    ReplyDelete
  29. "So the meaning of a word in a page is 'ungrounded,' whereas the meaning of a word in a head is 'grounded' " What exactly is the difference between ungrounded and grounded? Essentially, the word ungrounded is undefined. I get the sense that 'grounded' refers to words as consciously meaningful , but by that very definition does that mean all words/symbols are inert in meaning until there is a brain/program to add meaning value to it? Aka "that sense is in our heads and not in the symbol system" , which would lead to me conclude grounding is categorical perception, but meaning is categorical perception with the quality of feeling. Any insight/clarification is appreciated.

    "Here is where the problem of consciousness rears its head. For there would be no connection at all between scratches on paper and any intended referents if there were no minds mediating those intentions, via their internal means of picking out those referents." So is this where we get the definition that meaning is= symbol grounding + feeling...? As in feeling= consciousness ?

    ReplyDelete
    Replies
    1. To the best of my understanding, grounding is not just categorical perception because you could ostensibly train a system based on a set of rules to identify and categorize environmental stimuli. Until feeling is involved, you could make Searle's CRA argument - where you could be put in a room and have a set of rules in your head to categorize alien artifacts that you have no understanding or recognition of, and you could categorize them in a way that would completely fool a native speaker.

      Even if we could provide accurate categorical perception, there still would be no process of conscious mediation to give it concrete meaning.

      Delete
  30. This clarifies, and even furthers, Searle's Chinese Room Argument. Searle claims that a T2 doesn't understand as humans do, because it is simply connecting 'squiggles and squoggles' to other corresponding ones, as if he were connecting Chinese characters on the mere basis of their appearance. Most can agree that this isn't true understanding, as the meaning of the Chinese character, or 'squiggles and squoggles', isn't understood in itself, but only in relation to the character's shape: its lines, strokes, curves, etc. Consequently, what Prof Harnad explains as grounding, which I believe is connecting the symbols (squiggles) to their meaning (reference), must be essential to surpass T2, as well as to pass T3 (since a T3 can pass for a human). Grounding is necessary for cognition, as it is the only means through which meanings can be fully grasped and apprehended rather than merely perceived as symbols.

    ReplyDelete
  31. In class we have discussed the one missing element for animals to develop language: the ability to come up with propositional statements. Animals, as far as we know, are fully capable of sensorimotor categorization and of association learning between symbols and meanings. Since language is a crucial element of human cognition, it seems to me that having a robotic sensorimotor system for categorizing symbols is not enough to give AI language ability; rather, we need something that gives it the capacity to produce propositional statements. My question is: is this where computation would play a role in cognition, to achieve this ability?

    ReplyDelete
  32. I understand symbol grounding as the process of obtaining meaning from arbitrary squiggles. Since thought and language are not meaningless, how do we derive our meanings from symbols? Biologically, Wernicke's area and the angular gyrus are implicated in reading and making words make sense. Wernicke's area is thought to be a huge mental dictionary storing all the meanings to corresponding words, and the left angular gyrus helps us to perceive the squiggles and make them into words. I really liked the example of a dictionary, where one could entrap themselves in an endless cycle of meaningless squiggles without ever finding meaning. I don’t know enough about Wernicke's area to hypothesize, but I wonder if such a cycle is possible mentally as well.

    But from a more philosophical sense, how do we go from nothing to meaning? Is it an innate mechanism in our brains like Chomsky believes? Or is it learnt through our interactions with other people? Regardless, this method of going from ungrounded to grounded seems very dichotomous to me. Is it really possible for there to be only two options: meaning or not? I feel as though there must be some intermediate place of knowledge between the two. For example, if someone knows the meaning of a word (semantics) but not how to use it grammatically (syntax) is that word fully grounded?

    Additionally, I think the symbol grounding problem is very isolated in its independence. True communication is only beneficial if it can connect one person to another. Thus, the question of perception in symbol grounding is also very salient. If you were to settle on a meaning of a word that is different from my own, our communication would be greatly hindered. For example, if you were to believe that the word "eat" meant "to jump up and down excitedly", I would be very confused if you said your legs were too tired to eat lunch with me. This also makes me wonder: if you ground a meaning incorrectly, is it still grounded?

    ReplyDelete
  33. In class we pondered the phenomenon that apes are still unable to speak a language that corresponds to the language humans use. In your article on the grounding problem, you mention that there is a characteristic of language that humans have but that machines are unable to grasp. You say that in order for the system to be grounded it must be "augmented with nonsymbolic, sensorimotor capacities."

    Could it be that apes and machines, or non-humans generally, lack this capacity for language because their sensorimotor capacities are different from humans'? Their understanding of the world causes them to behave differently, and maybe they do not need the language humans have because they have been able to live and reproduce with a language of their own (maybe picking each other's dead skin off is a type of affection?). Apes probably don't need a language because they have never felt the need to verbalize their sensations? Maybe the only way to really find out is to put a human mind in an ape, but then comes the question of whether this "ape" would really behave as one or behave from the point of view of a human...

    ReplyDelete
    Replies
    1. I think it's more that apes evolutionarily lack the mental capacity to do so, and that's why they are unable to communicate verbally... The origins of language itself are certainly unclear, but our communicative abilities arose because our brains became more developed (especially in the frontal and parietal lobes). However, it has been seen that new world monkeys (after being taught, obviously) are able to communicate nonverbally through sign language. Machines, on the other hand, lack any cognitive abilities. Because cognition is not strictly computation, artificial computers are not able to behave in the same way an ape or human would.

      Delete
  34. "Meaning is grounded in the robotic capacity to detect, identify, and act upon the things that words and sentences refer to"

    Reading this, I saw that there were numerous capacities he describes as necessary for groundedness. However, I was wondering whether there is a minimum threshold for groundedness. Many people have never seen a kangaroo, yet if it is described to them (in physical features and behaviour), they will most likely be able to recognize one and use that animal category properly. However, they lose this ability when confronted with a similar animal that can be confused for a kangaroo, such as a wallaby. Here we see that their grounded meaning is not perfect. So where is the line beyond which something counts as fully grounded, understood, and used properly?

    ReplyDelete
  35. "What is the brain doing to generate meaning?"
    This is a very important question, and its answer is one we are unlikely to find by studying a computer; that is the problem with computationalism. We cannot reverse-engineer the human brain simply by substituting a machine for it and then claiming that the machine can think; the human mind/brain is too complex for that shortcut to count as understanding. As science tells us, the human mind has adapted and evolved over millennia, and it is interesting to think about how we developed spoken language and how that might have shaped our brains in particular ways; all of this remains mysterious, whereas a machine offers no mystery. Tricking a person over email into thinking they are corresponding with a real person is not going to tell us any more about the human mind and our ability to think.

    ReplyDelete
  36. They (humans) can discriminate, manipulate, identify and describe the objects, events and states of affairs in the world they live in, and they can also "produce descriptions" and "respond to descriptions" of those objects, events and states of affairs.

    I’m curious whether the Symbol Grounding Problem could be applied to a species that does not have language but still performs the above actions. For instance, honeybees use specific dances to indicate the direction of, distance to, and type of a food source to other bees in the hive. These bees then interpret the dance and travel to the location of the food source.

    It seems to me that, despite the lack of language, a certain type of grounding had to occur which allowed them to associate things like the duration of the waggle with distance and the angle of the run with direction relative to the sun. These seemingly arbitrary signals correspond to referents in their environment. I would argue that the Symbol Grounding Problem itself applies to both humans and honeybees. However, the grounding mechanisms would not entirely overlap, since honeybees would have to learn all categorical perception from sensorimotor feedback and would not be able to verbally relay the rules of the categories (as humans can occasionally do). Is it feasible to say that honeybees use grounding to connect symbols (movements/dances) with their referents (food sources)? In addition, if it is true that honeybees ground symbols in a way similar to humans, would they not make good model systems, given that their range of symbols and referents is significantly reduced compared to something like the Chinese language? A toy sketch of such a symbol-to-referent mapping follows.
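    Here is a hypothetical Python sketch of how those arbitrary dance parameters could map systematically onto a referent (the function name and the scaling constant are illustrative assumptions, not real bee-biology values):

    def decode_waggle_dance(angle_from_vertical_deg, waggle_duration_s,
                            sun_azimuth_deg, metres_per_second=750.0):
        # Hypothetical decoder: the dance angle relative to vertical stands for
        # the food source's bearing relative to the sun, and the waggle-run
        # duration stands for its distance (the scaling factor is made up).
        bearing_deg = (sun_azimuth_deg + angle_from_vertical_deg) % 360
        distance_m = waggle_duration_s * metres_per_second
        return bearing_deg, distance_m

    # A 40-degree dance lasting 1.2 seconds, with the sun at azimuth 120 degrees:
    print(decode_waggle_dance(40, 1.2, 120))  # -> (160, 900.0)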

    ReplyDelete
  37. "We know since Frege that the thing that a word refers to (its referent) is not the same as its meaning. This is most clearly illustrated using the proper names of concrete individuals (but it is also true of names of kinds of things and of abstract properties): (1) "Tony Blair," (2) "the UK's current prime minister," and (3) "Cheri Blair's husband" all have the same referent, but not the same meaning."

    I loved this paper. I think it really succinctly walked through the symbol grounding problem and its logical derivatives. I think the most interesting part of the theory is the roots from which it stems. Frege in particular sets this up well in his theory on names and referents.

    I thought your deconstruction of Frege's theory was interesting, in reference to his use of (1) "Tony Blair," (2) "the UK's current prime minister," and (3) "Cheri Blair's husband." It raises a nice philosophical question: which way of referring to Tony Blair comes closest to its true meaning? On one side, you could argue that "Tony Blair" comes closest, since it picks out its referent directly. On the other, you could argue in favour of "the UK's current prime minister" or "Cheri Blair's husband" as the names that best represent meaning, since they "wear their meanings on their sleeves," carrying features of the referent in their descriptions. I think this dichotomy demonstrates how complex ideas of language and meaning really are. It also lets us mark the clear divide between words on a page and the meanings we carry in our heads.

    ReplyDelete
  38. Steels describes the SGP as follows: "the question is whether it is possible to ever conceive of an artificial system that is able to invent and use grounded symbols in its sensorimotor interactions with the world and others". I wish he would clarify what he means by "grounded", since he never explicitly does. From Professor Harnad I have understood that "grounded" means possessing the ability to pick out the referent of the symbol. Does referent = meaning?

    I don’t think so; I see it as a step above: we add feeling to a referent to produce meaning. Since we know a computer or robot has symbols, the component we need to look at is feeling, in order to determine whether it attributes any meaning to the symbols in its system. However, that is the hard problem. We don't know how qualia arise in us, so how can we look for them in a robot or anything else?

    Since T3 capacity is what "grounds" the symbols in T3, I am wondering whether passing T3 would require the robot to have semiotic networks, i.e., pathways for navigating between concepts, objects and symbols, in order to produce semantic meaning and relations.

    But then there is also the question of how we could find this out, since it requires studying a T3-passing robot. How could we know there is a T3-passing robot amongst us unless it told us, given that it is behaviourally/outwardly indistinguishable from us? We wouldn't know unless we looked inside its head, and even then arguments come up regarding whether T3 can be passed without T4.

    Maybe we could try to think about it differently and focus on the process: what steps are required to achieve "grounding"? For example, is there a case that shows how we use words that are grounded, and what makes a word insufficient to reach the ground?

    ReplyDelete
  39. The symbol-grounding problem revolves around the question of how symbols get their meaning, if on their own they are just meaningless squiggles. In the Chinese/Chinese dictionary example, you would only be defining one squiggle in terms of other squiggles. Paging through a dictionary alone gets you nowhere; you understand a word based on the categories you have formed from your experiences. After all, your whole life is built on experiences and memories, and categorizing those moments makes life more meaningful for an individual. Symbol grounding grounds symbols in perceptual and sensorimotor categories, not in other symbols. Miming, for example, does not enable you to express propositions: a gesture can be accurate or inaccurate, but it cannot be true or false, whereas a proposition can.

    From what I understand, a basic vocabulary is grounded to start with and then built up hierarchically into a variety of combinations (a toy sketch of this follows). Since the reading states that we ground symbols through our categorization capacity, we would have to begin when we are very young, from our own experiences. As mentioned in one of the previous readings on the symbol-grounding problem, the candidate system is a hybrid symbolic/sensorimotor robot, because "meaning is grounded in the robotic capacity to detect, identify, and act upon the things that words and sentences refer to." If this is true, then computationalism, being purely symbolic, cannot by itself solve the grounding problem.
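    Here is a minimal Python sketch of that hierarchy (the detector names and percept format are made up for illustration; it simply assumes a couple of base categories have already been grounded through sensorimotor learning): new symbols can then be grounded indirectly, as symbolic combinations of already-grounded ones.

    # Stand-ins for learned sensorimotor category detectors (assumed grounded).
    def is_horse(percept):
        return percept.get("shape") == "horse"

    def is_striped(percept):
        return percept.get("pattern") == "stripes"

    # "Zebra" is never learned from direct experience here: it is grounded
    # indirectly, as a symbolic combination of grounded categories.
    def is_zebra(percept):
        return is_horse(percept) and is_striped(percept)

    print(is_zebra({"shape": "horse", "pattern": "stripes"}))  # True
    print(is_zebra({"shape": "horse", "pattern": "plain"}))    # False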

    ReplyDelete
