Saturday, January 6, 2018

(5. Comment Overflow) (50+)

4 comments:

  1. I find this reading of great help in understanding Searle’s argument, particularly its clarification of his appeal to consciousness. However, it appears problematic in light of the additional information provided. In Searle’s appeal to consciousness it is clear that the Chinese symbols are meaningless to him because he does not understand them; the implication is that for meaning to be going on in your head, you must be consciously aware of it. The concern is that when the brain uses its “know-how” to pick out a referent, the process is largely unconscious. Though it is fair to say that we become consciously aware once the brain has successfully completed this “know-how” process, I am hesitant to say that the same applies to the process of arriving at meaning. Furthermore, I wonder how experience figures into all of this, since the Chinese in Searle’s Chinese Room Argument could just as well have been English had he never learned it. Searle’s use of the word “understanding” leads me to believe that experience with a symbol system is perhaps essential to grounding its symbols.

  2. (1) Suppose that the name “horse” is grounded by iconic and categorical representations, learned from experience, that reliably discriminate and identify horses on the basis of their sensory projections.

    (2) Suppose “stripes” is similarly grounded

    Now consider that the following category can be constituted out of these elementary categories by a symbolic description of category membership alone:

    (3) "Zebra" = "horse" & "stripes"


    Given these rules for grounding symbols in representations, described on page 8, would it be possible for a machine such as Isaure to do this symbol grounding if it could pass T3? To me this seems very plausible, because machines too can learn from experience and can reliably discriminate between objects.
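
    A rough, purely illustrative sketch of the kind of composition in (3) — the detector names and the toy "projection" dictionaries below are invented placeholders, not anything from the paper: two grounded detectors stand in for the learned iconic/categorical representations, and the new category is defined by symbolic conjunction alone, so it inherits their grounding without any new sensory learning.

        # Toy sketch: "Zebra" = "horse" & "stripes", composed symbolically
        # from two detectors that stand in for grounded representations.

        def looks_like_horse(projection):
            # Placeholder for a detector learned from sensory experience.
            return projection.get("shape") == "horse"

        def has_stripes(projection):
            # Placeholder for a second learned detector.
            return projection.get("texture") == "striped"

        def is_zebra(projection):
            # Defined by a symbolic description of membership alone.
            return looks_like_horse(projection) and has_stripes(projection)

        print(is_zebra({"shape": "horse", "texture": "striped"}))  # True
        print(is_zebra({"shape": "horse", "texture": "plain"}))    # False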

  3. “No homunculus is involved here; simply a process of superimposing icons and registering their degree of disparity. Nor are there memory problems, since the inputs are either simultaneously present or available in rapid enough succession to draw upon their persisting sensory icons.” (Page 7)


    I don’t understand why there are no memory problems involved in symbol grounding: surely the individual would have to remember previous examples of the category in order to judge whether a new instance belongs to it? In the paragraph following the one cited above, it is even noted directly that “we need horse icons to discriminate horses” (Page 7). This also suggests that the individual would already have to be familiar with the icon of a horse in order to use it to discriminate horses from other animals or objects. Surely that means memory would also be required, unless I’m missing something or have misunderstood.
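
    To make the superimposition idea in the quoted passage concrete, here is a purely illustrative sketch (the numeric “icons” and the threshold are invented placeholders, not the paper’s mechanism): two icons that are both present at once are compared directly, so the same/different judgment itself needs no stored exemplars, whereas identifying which category something belongs to is a separate, memory-laden step.

        # Toy sketch: discrimination as superimposing two sensory icons
        # and registering their degree of disparity. Both icons are
        # available at once, so no stored exemplars are needed here.

        def disparity(icon_a, icon_b):
            # Mean absolute difference between two equally sized "icons"
            # (lists of numbers standing in for sensory projections).
            return sum(abs(a - b) for a, b in zip(icon_a, icon_b)) / len(icon_a)

        def same_or_different(icon_a, icon_b, threshold=0.1):
            return "same" if disparity(icon_a, icon_b) < threshold else "different"

        print(same_or_different([0.9, 0.8, 0.1], [0.9, 0.7, 0.1]))  # same
        print(same_or_different([0.9, 0.8, 0.1], [0.1, 0.2, 0.9]))  # different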

  4. Since a symbol system is completely arbitrary and we decide its rules, it is interesting to see that our entire existence can be described and categorized into arbitrary groupings to which we have given meaning. It is therefore not surprising that the Turing Test does not prove understanding and is all about symbol manipulation. How could we build a machine that has the same referents as we do and uses them the same way we do, if our own way of doing so is mostly arbitrary? We gave meaning to symbols, we live in accordance with them, and we invent connections between them in order to make sense of our world. However, a machine does not need to make sense of the world; it only needs to fool us well enough that we think it shares our understanding of our surroundings.

