Saturday, January 6, 2018

(2b. Comment Overflow) (50+)

10 comments:

  1. As Harnad notes, at no point does Turing formally define thinking. This omission is justified both by the limits of our knowledge about how thinkers are able to think and by the other-minds problem: because thinking is unobservable, the mental states of others remain inaccessible to us. Nonetheless, the operational stance that "thinking is as thinking does" successfully captures the focus of the Turing test: performance capacity. The ambiguity involved in defining thinking in terms of how we think and what it feels like to do so justifies the priority placed on performance capacity. That focus, in turn, rationalizes Turing's recasting of the original question as the Imitation Game. Despite the misleading connotations of "imitation" and "game" that Harnad justly acknowledges, this framing has the advantage of keeping considerations of Turing's question trained solely on performance capacity. The fact that Turing allows the game to be played by either a man or a machine translates straightforwardly into the requirement that the machine be able to do whatever the human player can do, thus capturing the focus on performance.
    Nonetheless, as Harnad identifies, posing the problem as the imitation game has significant limitations. The framework fails to capture the full breadth of human cognitive performance capacities by restricting them to what are essentially verbal abilities. A machine candidate in this framework would only have to pass T2, leaving out the real robotic sensorimotor performance capacities that define the intended level of the Turing test, T3.

  2. Harnad’s reading adds great clarity and insight to the ideas introduced in Turing’s paper. I focussed on Harnad’s commentary on ‘The Argument from Consciousness’.

    “According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking... [This] is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult” (Turing).

    Firstly, Harnad states that Turing is wrong to call this solipsism (the belief that only I exist and everything else is a dream). But is Turing indeed wrong? If only I can be sure that I think, then can't only I be sure that I truly exist? And hence couldn't everything else be my dream?

    Moreover, Harnad explains that the argument from consciousness is really just the 'other-minds problem' – that we cannot know anyone but ourselves has a mind – and that this problem is irrelevant to the Turing test: since we don't worry about the minds of other humans, because they behave just like us, we shouldn't worry about the mind of a machine that behaves just like us either. I agree that if a machine behaves exactly like us we shouldn't worry about whether it has a mind, but I think the question is rather whether a machine does behave exactly like us in ALL capacities (i.e., emotions included). We must determine this before we can stop worrying about whether it has a mind…

  3. Professor Harnad's comments on Turing's paper, specifically his argument for a T3 Turing test rather than the T2 Turing settled on, were excellent, and I couldn't agree more. Both Turing and the professor were right to set aside T4 and T5, since those would be unnecessary scaling-up if the intended goal can be achieved with T3. As the professor argues: "Would it not be a dead give-away if one's email T2 pen-pal proved incapable of ever commenting on the analog family photos we kept inserting with our text? (If he can process the images, he's not just a computer but at least a computer plus A/D peripheral sensors, already violating Turing's arbitrary restriction to computers alone). Or if one's pen-pal were totally ignorant of contemporaneous real-world events, apart from those we describe in our letters?" There is validity in the claim that to understand cognition, the robot must be able to do what thinking does, and that this must encompass more than just verbal performance capacity.

    Another great point raised in the paper is the question of free will. I believe the idea of free will, specifically the desire for and pursuit of it, is a uniquely human trait, and part of the human experience that a T3 robot would have to express indistinguishably from a human. However, I don't quite believe this is possible: even if we understood the mechanisms of how we do what we do when we think, instilling the desire or feeling aspect of free will is perhaps too large a challenge. I understand that free will raises the hard problem, but humans can also reason about free will logically, debating its pros and cons and whether we have it at all.

  4. “There is no one else we can know has a mind but our own private selves, yet we are not worried about the minds of our fellow-human beings, because they behave just like us and we know how to mind-read their behavior. By the same token, we have no more or less reason to worry about the minds of anything else that behaves just like us… The mental states (feelings), on the other hand, are moot, because of the other-minds problem”
    I had written my ramble on 2a before reading 2b; it is now abundantly clear to me that I had for some reason been thinking we were trying to discuss the hard problem all along. This passage is essentially saying that the hard problem does not matter for answering the 'easy' problem. I keep going back to our first lecture, where it sounded to me as if we were arguing that there cannot be a TT-passing machine unless it can also feel – but now I'm confused: were we saying that, or just that it would need sensorimotor abilities as well? (Could we have those without feeling…?) Anyway – good to know to just suspend my thinking about the hard problem at this point.

    Second thought: what makes solipsism and the other-minds problem so different? Is it that in the other-minds problem we don't assume everything else is unreal (the environment, for example, and other people), whereas in solipsism we do? This is, I think, similar to what we discussed in class (with regard to the whole "Matrix" line of questioning).

  5. This article really helped me grasp some subtleties of the topic. I did not know there was a distinction between artificial intelligence (AI) and cognitive modelling (CM), but I now clearly understand the difference. From what I understood, AI could be compared to cloning in the sense that both aim to generate “a useful performance tool”, where the way the tool works matters less than what it can do. However, I think we know much more about how AI works than about how a clone works.
    I am still confused about the physical form a machine can take and still count as CM, since Turing seems to contradict himself about this. I also did not understand the part about the pen-pal (p. 9); I did not get what it means or what it refers to.
    As for the question “Can machines think?”, I also believe it is not relevant here, for several reasons. First, I think this is a matter of artificial intelligence. Second, we will never know what the machine is feeling, or whether it is really feeling or only simulating it. Thus, I think this question will never be answered fully, unless we change the definition of thinking so that it does not take into account the feeling of thinking.

    “But empirical science is not just about a good showing: An experiment must not just fool most of the experimentalists most of the time! If the performance-capacity of the machine must be indistinguishable from that of the human being, it must be totally indistinguishable, not just indistinguishable more often than not.” (page 10)

    From what I can tell, it's not that Turing failed to account for this; rather, he was simply setting the bar lower, because of how ambitious the goal seemed given the machines in use at the time. Is it 'Stevan Says' that passing the Turing test must be like a mathematical truth, holding without exception? Does this mean that if a machine passed T2 99% of the time, Turing would have deemed it to have passed T2, but Professor Harnad would not?

  7. “We know now that what he will go on to consider is not whether or not machines can think, but whether or not machines can do what thinkers like us can do”

    This passage alone helped clarify Turing's intentions. It is not the same to say a machine can think as to say a machine can do what we do. Yes, many machines can do many things that we can do, perhaps even better, but can one machine do everything that humans collectively do?

    “This is fine, for thinking (cognition, intelligence) cannot be defined in advance of knowing how thinking systems do it, and we don't yet know how.”

    This was also one aspect that bothered me about Turing's paper. How can we concede or grant that machines can think when we cannot say, with absolute universal accuracy, what thinking is and how people think?

  8. This paper helped me differentiate between a thinking machine and an intelligent machine. Of course it is possible to create a machine with artificial intelligence that has the capacity for computation and the intelligence of a human. Our portable computers are an example of intelligent machines able to do computations, but the question of what intelligence is still remains. A thinking machine, as mentioned in the article, would have to pass several tests in order to be considered one. We would also have to answer the questions of what thinking is and how we think in order to create a thinking machine. But could we create a thinking machine if we can't create an internal state?

  9. "Now we ask: Do the successful candidates really feel, as we do when we think? This question is not meaningless, it is merely unanswerable -- in any other way than by being the candidate. It is the familiar old other-minds problem (Harnad 1991)."

    This explanation makes a lot of sense to me. When dealing with the idea of creating a machine that can behave exactly like a human, it is easy to get carried away and assume that, in order to behave like a human, it must take on other humanlike traits that actually make no difference to the behavior. In other words, whether the candidate feels as we do makes no difference to whether it can behave as we do. The other-minds problem is extremely interesting, but it is evident that in the case of the Turing test it does not answer the question we are trying to answer!

  10. I agree that T2's verbal performance would break down if someone examined it too closely. The longer the exchange went on, I have every faith that a human would be able to differentiate the robot from another human: robots cannot "read between the lines" the way humans can, and could never be a truly reflective product of the society we currently live in. Because of this, I believe T3 (full behavioral sensorimotor equivalence) is the proper level of Turing Test. If I understand T3 correctly, it is the only level that could ever remain truly indistinguishable over any length of time. Though it may (or may not) be able to feel, if it could behave in every way as a human does, it could simulate feelings (say ouch when you kick it, or cry when a puppy dies). But then I don't know what the point of a T4 or T5 robot would be, since obviously a higher-level robot with full neurological equivalence would be an even better model than one with only behavioral equivalence. Then again, as the other-minds problem tells us, there is no real way of knowing whether anyone other than you is thinking or feeling, so maybe T3 is enough.

