Saturday, January 6, 2018

(3a. Comment Overflow) (50+)

6 comments:


  1. My understanding is that what a computer program incapable of producing intentionality lacks is the mental processes required to do so. If a program had mental processes similar to the brain's, then logically it could produce intentionality. For instance, according to functionalism, mental states, such as those possessed by human beings and animals, are in fact functional states. Computers are machines that implement functions, so mental states would be like the software states of a computer. Assuming that functionalism is true, it could in theory be possible for a machine running the appropriate kind of computer program to have mental states and a mind. Of course, that machine would have to have the same capacity for rational thinking and analysis as humans. I wonder whether an AI (artificial intelligence) could succeed in thinking and reasoning like a human being if it passed the Turing Test, which checks whether the computer program is capable of performing the same intellectual tasks as human beings.
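    (A minimal sketch, assuming functionalism as this comment describes it: a "functional state" is individuated only by its input/output/transition role, not by the hardware that realizes it. The states, stimuli, and behaviours below are invented purely for illustration.)

        # Hypothetical illustration: a state defined by its causal role.
        # Whether TRANSITIONS is realized in neurons or in silicon is,
        # on the functionalist view, irrelevant to which state it is.
        TRANSITIONS = {
            ("calm", "insult"): ("angry", "frown"),
            ("angry", "apology"): ("calm", "smile"),
        }

        def step(state, stimulus):
            """Return (next_state, behaviour); unknown input leaves the state unchanged."""
            return TRANSITIONS.get((state, stimulus), (state, "no reaction"))

        print(step("calm", "insult"))  # ('angry', 'frown')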


  2. “My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.” (p. 5)

    I disagree with this, because surely part of the computation behind the individual’s response lies in recognizing the characters in the input. If the person can recognize the characters in the input well enough to then produce a response, they must understand it. In the previous example, where the individual hadn’t memorized the entire set of Chinese symbols, I can see how the individual would not be considered to understand the content, because he could look up the response in some kind of database, table, or other reference. However, if he is the reference (as described in the text cited above), then surely he has some knowledge and understanding of the system. In the following paragraphs, Searle addresses this rebuttal to some extent, but I don’t feel he gives enough credit to the hypothetical man who would have to memorize all the Chinese characters. I don’t see how Searle’s argument can stand as proof that the man is still unable to understand the material.
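    (A minimal sketch, assuming the internalized "rule book" works the way this comment describes: recognition as pure string matching. The symbol strings and replies below are invented placeholders, not Searle's actual examples.)

        # Hypothetical ledger: input shapes mapped to output shapes.
        # "Recognizing" the characters here is only string matching;
        # no meaning is consulted anywhere in the program.
        RULE_BOOK = {
            "你好吗?": "我很好。",
            "你叫什么名字?": "我叫小明。",
        }

        def chinese_room(input_symbols):
            """Return the reply the rules dictate, or a default symbol string."""
            return RULE_BOOK.get(input_symbols, "请再说一遍。")

        print(chinese_room("你好吗?"))  # 我很好。

    On this sketch, Searle's reply would arguably be that such "recognition" is exactly what he means by syntax without semantics: the lookup succeeds whether or not anything understands the shapes it matches.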

  3. “We can even imagine that the machine operates, not with a single serial program, but with a whole set of programs operating in parallel, in the manner that actual human brains presumably operate when they process natural language. Now surely in such a case we would have to say that the machine understood the stories; and if we refuse to say that, wouldn't we also have to deny that native Chinese speakers understood the stories? At the level of the synapses, what would or could be different about the program of the computer and the program of the Chinese brain?” (p. 8)

    Not all individuals have the same brain activity for language processing, so it is not accurate to claim that a certain profile of neural activity is what qualifies as understanding. This becomes even more complicated if we consider, for example, dogs. Although they are clearly not capable of human language, they can usually be trained to recognize certain words, such as “stop” or “sit”. They would likely also have a pattern of neural activity associated with their identification of these words, but can we then use this to suggest that they understand the true meaning of the words in English? For this reason I don’t feel it is reasonable to use brain activity as an unequivocal test of understanding.

  4. Regarding strong AI, Searle says that “indeed strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter” and that “unless you accept some form of dualism, the strong AI project hasn't got a chance”.
    Are computationalists willing to accept some form of dualism in order to stand by their theory? And is the hardware/software distinction already a sort of dualism when it comes to computers? This applies equally to the Chinese Room's rule-based manipulator of symbols, in which Searle is the hardware and the rules of symbol manipulation are the software, and yet there is no understanding of Chinese. If a computer could learn in terms of real understanding, not just symbol manipulation, and could then use that information to pass the TT, could we then claim it is capable of thinking?

  5. Machines have neither intentionality nor understanding. A machine may give the impression of understanding, but it never understands the way humans do. Another interesting point is how we can misinterpret the term “understanding” when we apply it to machines or computers. The thermostat example was a good one for making us realise that understanding is not defined merely by having the right reaction to the right situation. The only things a machine can understand are instructions. However, no instruction can create intentionality, reference to the world, or inferences from information that is not explicitly given. This article only reinforces the fact that computers and programs are doing nothing but symbol manipulation, and that with current technology there is no way they can do much more than that. Even if we could replicate our entire nervous system and neurons in a machine, I do not think we could make it think and have these human qualities, which are part of our mind, because we do not know where the mind is or how it really works.
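    (For the thermostat example, a minimal sketch of "the right reaction to the right situation" with nothing in it that could count as understanding. The setpoint and action names are invented for illustration.)

        # Hypothetical thermostat: behaviourally correct in every case,
        # yet it represents nothing about warmth, comfort, or the world.
        SETPOINT = 20.0  # degrees Celsius, arbitrary illustrative value

        def thermostat(current_temp):
            """Switch the heater based on a single numeric comparison."""
            return "heater_on" if current_temp < SETPOINT else "heater_off"

        print(thermostat(17.5))  # heater_on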

  6. If I understand correctly, Searle argues that computers do not have the power to think, and he does this by using the Chinese symbol analogy. His argument helped me understand more about the comparison between human cognition and computation. Someone can produce the symbols of the Chinese language and speak Chinese, but do they actually understand what they are saying? Although a computer can pass the Turing Test, can it understand the meaning behind its algorithms and programs? It learns a language by using software and different algorithms, but it does not have the internal causal powers needed to think about the meaning behind these algorithms. It is told what to say. It made me think about whether robots with artificial intelligence have the power to think, but I believe they are programmed to respond to different things, and this is man-made. I liked his discussion of dualism, about the separation of mind and body, which I think was right to bring up when comparing a program to the mind of a robot.
