Saturday, January 6, 2018

2b. Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence

Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (Eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer.



This is Turing's classic paper, with every passage quoted and commented to highlight what Turing said, might have meant, or should have meant. Turing's paper was equivocal about whether the full robotic test was intended or only the email/pen-pal test, whether all candidates are eligible or only computers, and whether the criterion for passing is really total, lifelong equivalence and indistinguishability or merely fooling enough people enough of the time. Once these uncertainties are resolved, Turing's Test remains cognitive science's rightful (and sole) empirical criterion today.

56 comments:

  1. “It is not meaningless, it is merely undecidable: What we mean by "think" is, on the one hand, what thinking creatures can do and how they can do it, and, on the other hand, what it feels-like to think. What thinkers can do is captured by the TT. A theory of how they do it is provided by how our man-made machine does it. (If there are several different successful machines, it's a matter of normal inference-to-the-best-theory.) So far, nothing meaningless. Now we ask: Do the successful candidates really feel, as we do when we think? This question is not meaningless, it is merely unanswerable -- in any other way than by being the candidate. It is the familiar old other-minds problem (Harnad 1991).”

    So, does “thinking” essentially get split into either the “doing” category (the “easy question”) or the “feeling” category (the “hard question”)? In that case, the easy question has essentially been answered. However, the feeling question, from your point of view, will never be answered, because it always comes back to the other-minds problem?

    1. The reason the hard problem is hard is not the other-minds problem. It's that until and unless someone can give a causal explanation of how and why we feel, rather than just do, feeling remains causally superfluous.

      Although they are related, it's important not to mix up the hard problem and the other-minds problem.

    2. My take is that the body (including the brain) is sufficient to generate feeling; any other starting point is dualism. That means any feeling is either identical to, or generated by, some bodily state. (That's the hard problem, right? Does there exist a causal explanation for the generation of feeling, or is feeling merely identical to the physical, an epiphenomenon?) But feeling comes in well before the answer to the hard problem, because in cognitive science we refer to feelings all the time. We make links between brain activity and corresponding feelings, along with doings. It's not a matter of should or shouldn't, because it happens whenever we correlate a reported feeling with some physical observation. This does not get us closer to understanding the relationship between feeling and the body, but it does allow us to treat feeling in a causal context.

      Sometimes we correlate physical activity with a reported feeling, which allows some sort of link to be made (though not the how/why). Other times we change the physical and observe a change in feeling, or, vice versa, a feeling is reported and then we observe a change in the physical. This is not dualism, but a matter of descriptive limitations. Some physical events cannot be correlated with a feeling, and likewise some feelings might not be correlated with any one physical state. This is not a limit on technology, but a reflection of the fact that physical states and processes do not map one-to-one onto particular feelings. So it seems natural to refer to a feeling as shorthand for an unidentified physical process in some causal explanation. That doesn't explain the link between that feeling and the physical process, but it implicitly assumes that some physical process is going on (non-dualism).

      Maybe the hard problem appears so hard because it is trying to elucidate that relationship, which might not even be one of causality. There is a cause for everything that happens, but one thing understood in two separate ways does not take the form of cause and effect: an apple is an apple; there is no causation when the relationship is one of identity. On that view, I would suggest that the hard problem is not even a problem to solve, but the assumption on which we must ground the rest of our work.

    3. Abigail, there's zero doubt (Descartes) that feelings exist and occur, hence we can talk about them; and next to no doubt that they are caused by the brain. But cogsci is about reverse-engineering them, just as it's about reverse-engineering doings. The latter is "easy"; the former is "hard." (But even with doings, we don't just talk about them, or engage in them, or fix them: we need to explain how the brain generates them.)

  2. "The only relevant property is that it is "mechanical" -- i.e., behaves in accordance with the cause-effect laws of physics (Harnad 2003)."

    I’m not sure that the cause-effect laws of physics fully capture what it is to be mechanical. Do humans not also behave in accordance with the cause-effect laws of physics? Every effect has a cause, whether it is explicitly known or not. Perhaps it is helpful to operationally define what qualifies as a cause, because anything can be a cause. Is a random thought a cause? Is the firing of a neuron a cause? In saying that to qualify as mechanical you must abide by the cause-effect laws of physics, are you also saying that humans are mechanical in some sense?

    1. 1. Mechanical means causal.
      2. Yes, all organisms are causal mechanisms.
      3. Organisms are physical (biological) systems with certain functional capacities, especially behavioral ones. Cognitive science is reverse-engineering: determining the causal mechanism that generates organisms' behavioral capacities.
      4. Yes, neural firing can be a cause of something else.
      5. We're waiting for cognitive science (reverse-engineering) to tell us what a thought is.

  3. "one needs to remind oneself that a universal computer is only formally universal: It can describe just about any physical system, and simulate it in symbolic code, but in doing so, it does not capture all of its properties"
    I apologize if this goes a bit beyond class content, but it is something that has been confusing me. From what I know, the Church-Turing thesis was put in an even stronger form as the Church-Turing-Deutsch thesis in 1985: “every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means”. According to David Deutsch's paper, which showed this, computation is not just a branch of mathematics; it is more accurately a branch of physics: computation and the laws of physics are intimately related. Wouldn't that imply that what brains are doing (cognition?) is computation and not anything more (because there can't be more)? So our best current knowledge of physics would make brains functionally computers? I think this has been part of my difficulty in grasping the material presented thus far. This is not itself an explanation of feeling and thinking, but if feeling and thinking are physical processes then they are capable of being perfectly simulated.
    This quote from David Deutsch may better explain what I mean: “[…] it seems that when we’re studying quantum computers we’re studying how matter and energy behave under extremely unusual and contrived circumstances — something that may possibly be important to us for practical reasons, but of no fundamental significance. Yet we know that that is the wrong conclusion. We know it because of the existence of computational universality: the laws of physics allow for a machine — a universal quantum computer — with the property that its possible motions correspond in a suitable sense to all possible motions of all possible physical objects. Therefore the whole of physics and more — the study of all possible physical objects — is just isomorphic to the study of all programs that could run on a universal quantum computer.” (Found at http://www.daviddeutsch.org.uk/wp-content/PPQT.pdf)

    Main source - David Deutsch’s 1985 paper: https://people.eecs.berkeley.edu/~christos/classics/Deutsch_quantum_theory.pdf

    1. Until further notice, computation means Church/Turing computation (i.e., the weak C/T thesis): manipulating symbols based on formal, shape-based rules. The strong C/T thesis is based on C/T computation (hence on the weak C/T thesis). Quantum Mechanics is irrelevant and quantum "computation" is not C/T computation.

      I strongly suggest that you forget about QM and QC in this course: we have puzzles enough, like the "hard problem," without adding the quantum puzzles!

      That's part of why I said the strong C/T thesis applies to "almost" everything. Leave out true continuity, infinity and QM.
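      To make "manipulating symbols based on formal, shape-based rules" concrete, here is a minimal sketch of a Turing-style machine in Python. It is only an illustration (the rule table, the blank symbol "_", and the bit-flipping task are my own assumptions, not anything from Turing's paper): what the machine writes and where it moves depend only on the current state and the shape of the symbol it reads, nothing else.

      def run_turing_machine(tape, rules, state="start", halt="halt"):
          """Apply shape-based rules: (state, symbol) -> (write, move, next_state)."""
          cells = dict(enumerate(tape))  # sparse tape; "_" is the blank symbol
          pos = 0
          while state != halt:
              symbol = cells.get(pos, "_")
              write, move, state = rules[(state, symbol)]
              cells[pos] = write
              pos += 1 if move == "R" else -1
          return "".join(cells[i] for i in sorted(cells))

      # Rules that flip every bit -- pure symbol manipulation, no "meaning":
      flip = {
          ("start", "0"): ("1", "R", "start"),
          ("start", "1"): ("0", "R", "start"),
          ("start", "_"): ("_", "R", "halt"),
      }
      print(run_turing_machine("1011", flip))  # -> "0100_" (trailing blank)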

  4. In the commentary on Turing's statement “An interesting variant on the idea of a digital computer is a "digital computer with a random element"... Sometimes such a machine is described as having free will (though I would not use this phrase myself)” (Turing, 1950), Harnad states that this free will would be better thought of as autonomy in the world. However, without getting too far into the question of whether humans have free will, one must wonder what motivation would drive the behaviour that free will now makes possible. Given that performance can be bound by computational rules and physical causality, it is an interesting thought to explore: what would an “unfeeling” machine with free will decide to do within the limits of its performance, and what would drive those decisions, and the machine's cognition, if they are not to be just random and non-scripted?
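    What Turing means by a "random element" can be made concrete with a small sketch (an illustrative toy of my own, not from Turing's paper): the same deterministic rules, plus one coin-flip instruction, so that repeated runs from the same starting state can diverge.

    import random

    def deterministic_policy(state: int) -> str:
        # Every run maps the same state to the same action.
        return "left" if state % 2 == 0 else "right"

    def random_element_policy(state: int, rng: random.Random) -> str:
        # Same rules, except a hypothetical "tie" condition is broken by
        # a random element, so identical states can lead to different actions.
        if state % 3 == 0:
            return rng.choice(["left", "right"])
        return deterministic_policy(state)

    rng = random.Random()  # unseeded: behaviour varies from run to run
    print([random_element_policy(s, rng) for s in range(6)])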

    1. Determinism, Free Will, and Feeling

      The problem of "determinism" ("Was everything that has ever happened already causally determined in the Big Bang?") is not a cogsci question. It could be asked even in a universe without living organisms. It becomes a cogsci question when we ask "Why does it feel like I'm the cause of my finger lifting, when I voluntarily lift my finger?". But this is just another instance of the "hard problem": "How and why does anything feel like anything at all?"

      We could say that if all feeling (green, blue, hot, cold) were merely passive and decorative, and didn't feel as if it had any causal role at all, just like a passive captioning or sound-track accompanying everything that happens, then the "hard problem" would still be there, and still be hard, but not quite as hard as it really is:

      Why? Because the hard problem -- "How and why do organisms feel rather than just function?" -- is a question about causation. We are asking for an explanation of the causal role of the biological trait of feeling. Surely it's not just a non-functional decoration!

      Well ("Stevan Says") the free-will problem comes closest to home here. What is at issue is the causal role of our feelings, as in "I did it because I felt like it!" (or at least "It felt like I did it because I felt like doing it" and not because of the Big Bang, or even a patellar reflex rap from my doctor...)

      (Take a little time to think this through: It's a pygmy thought, but not a trivial one.)


  5. I find the argument that “it is not clear that anyone or anything has "originated" anything new since the Big Bang” hard to believe. I understand that most of what humans “create” is just a new assembly of things we already know, but there must have been something that was first created that led to the other creations, and it cannot have been just the Big Bang. All new songs contain elements of older ones, and arguably every melody has already been played; but if we trace every song that inspired another, we must come to a first song that led to the others, right? If not, then how was the first song created? Someone must have made something original at first, or is everything inspired by what was there after the Big Bang?

    1. That quip was just to remind us that we should not be too demanding about what it means to "originate" something completely new. It all originates in the Big Bang, and when it comes to individual human creativity, it's all derivative and recombinatory, but some of the recombinations are just pygmy ones and others are giant ones...

  6. “our verbal ability may well be grounded in our nonverbal abilities”

    I think this is relevant to the discussion of the child robot towards the end of the paper, because much of learning is done through interacting with our environment, and I think that would make it difficult for an email machine ever to think fully. But Turing does mention a robot attending class with a child, so maybe it would be interacting with the world in the same way children do. However, the argument that “the fact that eyes and legs can be simulated by a computer does not mean a computer can see or walk” raises an interesting point and makes me wonder whether a robot would ever be able to interact with the world in the way that we do, and therefore whether it will ever be able to acquire the knowledge and thinking capacity that we gain from our experiences.

    1. You can't send a simulated robot into the real world (any more than you can vacuum real-world dust with a simulated vacuum cleaner). Robots have to be real robots (no matter how much computation may be going on in their heads). Again, the right one to think of is Isaure (and for the child-robot in class, Isaure's kind-sib!).

      Make sure you understand the difference between the Turing machine, the Weak C/T thesis, the strong C/T thesis, computer simulation and the Turing Test (T2 and T3).

  7. This paper was awesome; its explanation of Turing's (often confusing and contradictory) statements helped me metabolize Turing's key points. That said, the most important takeaways from this reading were not Turing's statements, but two of the suggestions for what he should have said.

    First and foremost was “T3”: the idea of total indistinguishability in robotic (sensorimotor) performance capacity. I agree that this is a more rigorous way to define the Turing Test, because it brings the test into the real world, not a virtual world of simulations.

    Secondly, the added requirement that the machine/computer in the Turing Test be indistinguishable from a human indefinitely, with no room for some people to be able to tell and others not, makes this test MUCH more compelling and thought-out. I always found the requisite of fooling 50% of people for a 5-minute chat extremely arbitrary and not a relevant test to demonstrate thinking.

    1. I (a pygmy) don't believe for a minute that Turing (a giant) meant his five-minute 70% Gallup Poll was the TT! It was just his guess as to where cognitive science would be 50 years later, in 2000:
      "The Turing Test Is Not A Gallup Poll (And It Was Not Passed)"

    2. I also found the idea of the computer being indistinguishable from humans intriguing. After reading this article, I wonder whether there is a limit to what machines can learn and achieve. The article states that machines may be able to surpass humans in the imitation game; if this is possible, what other ends will they be able to achieve?

  8. The Turing Test is designed to determine whether a machine can do everything a human can do, in such a way that we could not tell that it is not in fact human. The test leaves out many aspects which Harnad writes are rightly excluded, such as physical appearance. Turing also makes a point of writing that all correspondence between the person and the machine must be done via typewritten messages. This has, as Harnad writes, “the unintended further effect of ruling out all direct testing of performance capacities other than verbal ones.” Though Turing meant to eliminate all elements of physical appearance and functioning (in this case voice), in the process he created a test that is very good for testing whether a machine can use verbal information effectively and in a human-like manner, but severely limited in gauging performance when the machine receives non-verbal information. Say we have a machine which passes the Turing Test as Turing described it, through email-like interactions: what would happen if it received an image and were asked to look at something in that image? Would it still respond in a human-like manner, making the connection between verbal and visual information? Though language is a hallmark of our intellectual capacity, it is not the only aspect of it, and thus a machine that passes the Turing Test as originally designed would pass not because it can necessarily do everything we can do, but because it can do everything that we can test directly. The question then becomes: how would we test for non-verbal capabilities?

    1. The Turing Test has several versions. Turing's original version was just verbal (T2), but the robotic version (Isaure: T3) includes everything a person can do (including facial expressions and tones of voice).

      "Stevan Says" nothing could pass T2 unless it could also pass T3 (even if you are only testing verbally)...

    2. Hey Maria! Really enjoyed your response to the Harnad article. Though I understand the Turing Test is intended as a thought experiment more than a true test, I was struck by many of its shortcomings as well. What came to mind reading your response was what would happen if we introduced an emotionally laden text into the Turing test. How would the machine fare against the human participant if both were expected to interpret the emotional subtext of the interrogator's message? Even within our verbal abilities there are important subtleties, such as emotion and nuance, that the Turing test fails to account for, but that would assist an interrogator in the quest to distinguish the computer from the human. Great response!

  9. "Would it not be a dead give-away if one's email T2 pen-pal proved incapable of ever commenting on the analog family photos we kept inserting with our text? (If he can process the images, he's not just a computer but at least a computer plus A/D peripheral sensors, already violating Turing's arbitrary restriction to computers alone). Or if one's pen-pal were totally ignorant of contemporaneous real-world events, apart from those we describe in our letters? Could all of that really be second-guessed purely verbally in advance?"
    My question about this passage is: wouldn't a T3 robot (total indistinguishability in robotic (sensorimotor) performance capacity) be equally incapable of this? If we don't inform it of current events somehow (either by talking with it or by allowing it access to media it could process), how would it know about them? Could we not even go one step further and ask how it would react to the family photos? Of course it would be able to see them, and it could maybe even comment on the physical appearance of the people in the pictures, but it would never understand the emotional value attached to them. The second someone asked it either to say what emotion was present in a picture or how the picture made it feel, it would be exposed as a robot. Perhaps this is too much like the argument from consciousness that Turing discusses, but it does raise the interesting question of where the line between non-thinking robot and person is drawn…

    1. To replace armchair intuitions about what a T3 robot would or wouldn't be able to do, why not try it out with Isaure (because that's what's meant by a TT-passing robot!).

  10. Jefferson (1949): "Not until a machine can [do X] because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain"

    I would like to comment on the argument from consciousness. I understand the problem surrounding our not knowing what drives (feelings?) humans or machines to do what they do, but how is this relevant to the Turing test? From my understanding, the primary aim of the test is to find an explanation for the easy problem, which concerns doing, not feeling. Thus the argument from consciousness certainly brings up a good point: not until we understand the hard problem will we be able to say that machine equals brain. Nevertheless, it is an invalid argument against the Turing test, since it does not concern what the test is concerned with.

    1. Turing wrote his paper in 1950. The Jefferson quote is 1949. So he made it before the TT was proposed. I think Turing cited it as a wrong-headed objection. It is invalid against the TT because Turing explicitly says he's only talking about what people can DO -- which one can observe and model -- and not about what they feel (the hard problem).

  11. While I understand Turing's argument for making disabilities irrelevant in the imitation game, this overlooks the identifying information that nonverbal cues/abilities provide. Verbal information is evidently the most important source of information, but nonverbal cues also say a lot about a person. I wonder how a program would be devised to understand non-verbal information, such as visual or tactile input. Don't these categories of sensation and perception also fall under what composes consciousness? That being said, the idea of a program starting out as a 'child's mind' may give way to the learning of these other facets of consciousness. The question, however, is still how.

    1. The robotic version of the TT -- T3 (Isaure) -- includes all behavior, verbal and nonverbal.

  12. I accidentally erased a comment in answering it. This is my reply. Could the person who posted the comment re-post its gist ("how is the TT not a Gallup Poll?") as a reply to my comment here? (My apologies).

    I mean that when each of us is testing (lifelong, if need be) whether Isaure (or any other person we ever meet) can do anything/everything a normal person can do, we are not conducting a Gallup Poll (where the outcome is determined by a percentage vote, as in the Loebner Contest). We do it individually, constantly, and the thought never crosses our mind. (Is that what you mean?)

  13. I really liked this paper and found the format helpful in understanding the more confusing topics from the Turing paper. From my understanding, T3 is able to perform all behaviours that a person could perform, verbally and nonverbally. It subsumes T2, as the machine is also able to perform sensorimotor behaviours, which makes it the best candidate for the Turing Test. In addition, a T3 does not have to have “artificial flesh,” like T4 and T5, and T5 is essentially indistinguishable from a human, both physically and mentally.
    I was wondering how exactly the T3 and T4 Turing Tests differ? I'm struggling a bit to conceptualize something that could exist between T3 and T5.

    1. I was also confused about the nuances within the hierarchy of the Turing Tests. I believe that T3 and T4 differ in that T4's internal structure/function is indistinguishable from humans, which would be at the microlevel. But how T4 and T5 differ, I'm not exactly sure about. What I'm confused about still is why T4 and T5 even exist as ideas if the Turing Test is about mind-modeling. I know that T4 and T5 go beyond the Turing Test, but does it also go beyond the goal of cognitive modeling? If the purpose is to reverse-engineer the mind, why are T4 and T5 necessary?

    2. Forget about T5; the T4/T5 distinction is just for philosophers.

      But T4 is cognitive neuroscience: there are at least as many people who think that the way to pass T3 and solve the easy problem is to study what's going on inside the brain as there are people who think that the way to pass T3 is to do computational modelling.

  14. I want to address the quote: "The original question, "Can machines think?" I believe to be too meaningless to deserve discussion."

    I think that although this question is impossible to answer, it is not too meaningless to discuss. We cannot put ourselves into the minds of others to see the world as they see it or to experience their thoughts, and likewise we cannot do this with machines. With machines, what we can observe and qualitatively test is all we can extract as fact. Since we cannot enter the minds of machines, we will never be able to answer this question. That does not, however, make the question unworthy of discussion, and I think Turing misses the mark by stating this. It is not meaningless, and if machines are one day somehow able to surpass humans (as we do not know the limit of what they can learn), the question will carry more and more meaning.

    1. Turing thinks humans are just machines, by which he means causal mechanisms. So what he means is that when we study machines, whether human or non-human, the only evidence we have to go by is what they can do (including what they can say) -- that's T3/T2 -- and what's going on inside them (T4). "Thinking" is whatever does all that. But what we mean by "thinking" is also what it feels like to think. And that Turing sets aside (he would have called it the "hard problem" if that had been the fashionable term at the time).

  15. “It is necessary therefore to have some other "unemotional" channels of communication. If these are available it is possible to teach a machine by punishments and rewards to obey orders given in some language, e.g., a symbolic language. These orders are to be transmitted through the "unemotional" channels. The use of this language will diminish greatly the number of punishments and rewards required.”

    Turing's idea of “unemotional” channels of communication paired with reward-punishment learning seems like a paradox to me. How can this teaching process, which relies heavily on the emotional motivation systems in humans, be achieved through “unemotional” channels? Even if it can be implemented, it is certainly not the way humans learn. The ability to feel emotion is something that we humans do. If the robot were to learn through “unemotional” channels, then wouldn't it lack the full capacity to do all that humans can do, making it unable to pass any level of the TT? Turing's idea of machine learning also becomes problematic when he dismisses the importance of “legs, eyes, etc”. The robot does not need to have legs or eyes, just as some humans don't have legs or eyes. But it must have some other way of experiencing the environment through sensorimotor performance in order to learn, and in order to pass T3.

    1. Devona, you argue that “the ability of humans to feel emotion is something that we do,” and that this is central to our ability to learn. Indeed, I think this is usually true, but some people have personality disorders resulting in blunted or absent emotions and an inability to learn through emotions the way most other people do. Considering this, it might be interesting to investigate learning in people with these kinds of personality disorders to better assess Turing's idea of an “unemotional channel” of learning.

      You agree with Turing when you state that “the robot does not need to have legs or eyes, just as some humans don’t have legs or eyes,” as long as the machine has some other way of experiencing its environment and learning through sensorimotor capacities. I agree that legs and eyes are not necessary to pass T3, but they are necessary to pass T5. T5 is “total indistinguishability in physical structure/function” and a machine must have most of the typical physical features of humans in order to pass T5, and thus, the Turing Test as a whole.

    2. Devona, by "unemotional" Turing just means unfelt. And if you have an explanation of why reinforcements have to be felt (rather than just detected and acted upon), you have solved the hard problem.

      Anna, T3 may not need eyes and legs, but it needs some sensory input channels and some motor output channels to ground its symbols, doesn't it?

      (And "Stevan says" the capacity to pass T2 has to be grounded in the capacity to pass T3, otherwise the symbols have no meaning: the "symbol grounding problem," week 5)
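      Here is a minimal sketch in Python of what "detected and acted upon" reward/punishment learning can look like (a toy trial-and-error learner of my own, not Turing's or Harnad's actual proposal; the "obey"/"disobey" task is hypothetical). The reinforcement is just a number that updates action values; nothing is felt:

      import random

      def learn(reward_of, actions, trials=1000, epsilon=0.1, lr=0.1):
          value = {a: 0.0 for a in actions}      # estimated value of each action
          for _ in range(trials):
              if random.random() < epsilon:      # occasionally explore at random
                  a = random.choice(actions)
              else:                              # otherwise pick the best estimate
                  a = max(actions, key=value.get)
              r = reward_of(a)                   # reward/punishment, merely detected
              value[a] += lr * (r - value[a])    # update the estimate; nothing felt
          return value

      # Hypothetical orders: "obey" is rewarded (+1), "disobey" punished (-1).
      print(learn(lambda a: 1 if a == "obey" else -1, ["obey", "disobey"]))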

    3. I return to this response with a better understanding of the Turing hierarchy and symbol grounding. Perhaps in order to pass T5, the “indistinguishability in physical structure/function” must be in terms of what allows for equivalent sensorimotor and perceptual experiences. For example, a candidate machine missing legs could be indistinguishable because the legs themselves are not necessary for sensory experience. The machine would need some way to experience sensation on the skin, irrespective of where on the body the sensory input is received. Along the same lines, it should (generally) have all the other sensory experiences that humans have: vision, hearing, smell and taste. Without sensory experience it would not be able to ground the meanings of the symbols it encounters in the world, and it certainly would not pass T5.

  16. “For thinking (cognition, intelligence) cannot be defined in advance of knowing how thinking systems do it, and we don't yet know how. But we do know that we thinkers do it, whatever it is, when we think; and we know when we are doing it (by introspection). So thinking, a form of consciousness, is already ostensively defined, by just pointing to that experience we all have and know.”
    I agree with this explanation of thinking; I am just wondering how feeling is defined. Is it another form of consciousness? Also, the definition of the hard problem given in class is “explaining causally how and why the brain generates feelings”, but should thinking also be involved in the discussion of the hard problem? Or is thinking the easy problem, which works similarly to rule-based computation? In addition, the paper states that “what we mean by ‘think’ is, on the one hand, what thinking creatures can do and how they can do it, and, on the other hand, what it feels-like to think.” It might be irrelevant, but I am quite confused about what “what it feels like to think” means: does this feeling differ from emotional feelings or sensory feelings?

  17. “T3: Total indistinguishability in robotic (sensorimotor) performance capacity. This subsumes T2, and is (I will argue) the level of Test that Turing really intended (or should have!).”

    While I agree with the sentiment that the machine would have to have at least similar sensorimotor performance capacity to be judged as “thinking,” in the sense of doing what a thinking being can do, I feel like the wording of this test makes it possible to discount a thinking being. For example, if our definition of thinking were based on what a normally sighted person does, and we were to apply the test to a colorblind person, the wording of T3 would lead us to the conclusion that the colorblind person was not capable of thinking, since it would be possible to distinguish them from the person we have already claimed to be a thinking being. Similarly, someone who was able to see more colors than normal would also not be totally indistinguishable, so they too would fail.

    The issue then becomes what amount of sensorimotor performance capacity is necessary to be able to think, which I do not have a satisfactory answer for.

  18. I kept coming across the words “performance capacity,” and I think it's interesting to see that thinking is associated not with feeling but with doing. It's all about the fact that computers can do what humans can do (i.e., performance) and not what drives them to do it (i.e., feelings and emotions). Does this concept of doing as a form of thinking tie into the definition of consciousness? The fact that computers can do can make them pass the Turing Test. But at the end of the day, do computers know that they are doing? As humans, we know we are thinking, but how do we know a computer knows that it is doing, and how will we ever know this?

    1. I think a reason why we often focus on and test a machine's actions rather than its thoughts is that this is easier to test. We are incapable of knowing whether machines can think (in fact we have no solid evidence that proves that other people think either), but it is relatively easy to measure a machine's competence on several given behavioral tasks. In response to your other questions, I believe that if machines were to have a consciousness similar to a human's, it would make sense that they would be able to know that they were thinking, just as we know. This, however, is based on the idea that a machine's consciousness would be replicated in a way similar to ours, which is not guaranteed to be true.

  19. When I was reading the Turing paper, I found it difficult to assess whether or not machines could think without having a definition of “think”. Harnad’s paper clarified this: “What we mean by ‘think’ is, on the one hand, what thinking creatures can do and how they can do it, and, on the other hand, what it feels-like to think.” Harnad goes on to explain that wondering if others feel the same things we do when we think constitutes the other-minds problem, because it is impossible to know what others feel without being them. However, I am still unsure what the difference is between the other-minds problem and solipsism. If we cannot access other thinking creatures’ thoughts and feelings, how can we be sure that other thinking creatures exist? Is it simply by observing what we can do as a result of our thoughts, noticing that others do similar things and thus deducing that they must be able to think too? I wonder if it would be possible for a machine to do the same things we do without the act of thinking as Harnad defines it.

  20. Since Turing said "we do not wish to penalize the machine for its inability to shine in beauty competitions" and therefore the appearance of the machine in question is of no importance, what do you think he would think about IBM's Watson? He makes note of the fact that you must include "digital computers... [that can] carry out any operations which could be done by a human computer" in the test, but do the abilities of Watson surpass those of regular humans? Seeing as the knowledge possessed by that particular computer was far more extensive than that of any other human contestant on the Jeopardy game, would Watson be excluded from the Imitation Game?

  21. “Wouldn't even its verbal performance break down if we questioned it too closely about the qualitative and practical details of sensorimotor experience?”

    This is a question I have, since we ourselves have the hard problem and don't know how qualia happen, but they do. If we don't know how it happens in us, how can we ever make computers that can do it? My computer pen-pal could totally talk to me about gardens when I share that I went to one, but would it introspect and share its own related experience, telling me how it felt that one time it saw a budding rose? It could tell me it saw one, but could it tell me how that made it FEEL? That is something a human would share with a friend very normally; the whole point of having a pen-pal, essentially a friend, is to mutually share experiences. Sure, this computer could have indistinguishable verbal capacity in the sense of having stored many words of language and knowing how to string them together coherently/convincingly, but would it just react to what I was saying and not share anything of its own? How can it really share anything when it hasn't had its own experiences? Would the computer just make up scenarios attributed to keywords it notices in my message? Would it only tell me a made-up story about something when directly asked? Because that isn't how natural conversation flows between humans, right? We jump from one thing to another, almost wanting to talk about so many things as different experiences or interests become active in our mind, triggered by our surroundings, or a memory, or so many other things. Can the computer pen-pal do all this? I think I would notice something fishy…

    1. Hi Shaista, I think your comment gets to the heart of Harnad's assertion that to pass T2 the "pen-pal" must be a T3. He makes the point that in order even to play the game (T2 test) in the real world, a real robot (T3) would be required. But, also, in order to pass T2, the T2 machine must have exactly identical performance capacity to a human, both empirically (the doing) and intuitively (the way they do). As you note, likely only a machine that has real experiences could relate to you enough to not raise suspicion, and to pass the T2 test. I think we can all stand in agreement that Isaure has a life and experiences like the rest of us.

  22. The T2 level of the Turing Test suggests that the machine in question is totally indistinguishable in email (verbal) performance capacity. The digital computer that Turing describes has the component of universality, similar to the way humans have a “universal grammar” for language from the beginning. However, I'm curious whether a machine supposedly capable of passing the T2 Turing Test could capture the subtleties of language, such as idioms, expressions, sarcasm, humour, and even language switching, or would it be confounded by them?

    1. Phoebe, I think you bring up a great point about whether a T2 machine has enough pragmatic competence to understand the subtleties of language. For example, to understand whether someone is being sarcastic, you would need information from two sources: the context of the situation and the prosody used in speech. A T2 machine would not be able to analyze the acoustic correlates of prosody, since the exchange is through email. It would have to rely on context alone, making the chance of misinterpretation greater. However, even humans can make mistakes in judging what someone else is trying to say when communicating through email; it all depends on how much information there is in the context. So if the T2-level machine makes a mistake or cannot accurately interpret what the human is trying to convey, that would not necessarily make the machine less “human.”

  23. “Here is the beginning of the difference between the field of artificial intelligence (AI), whose goal is merely to generate a useful performance tool, and cognitive modeling (CM), whose goal is to explain how human cognition is generated. A device we built but without knowing how it works would suffice for AI but not for CM.”
    I think the difference drawn here between artificial intelligence and cognitive modeling is one that is often overlooked. Cognitive modeling is much more a case of attempting to understand the human brain and replicate it, whereas artificial intelligence concerns itself with finding alternative ways of replicating consciousness within something man-made. I understand that cognitive modeling can be helpful in many future scientific investigations, but I'm not sure of the use for AI under this definition. I've always seen AI as a project that people work on in order to explore the topic of consciousness, as well as generally exploring how to replicate something similar to ourselves, but how is this useful to us?

    1. I don't think the end goal of AI is to find an alternative way to replicate consciousness; rather, as the article mentioned, it is purely to create a very powerful and useful tool. I personally feel that AI doesn't even need to recreate any sort of feelings or human-like behaviors, because that is not what is fueling the research and interest in AI. I do agree that cognitive modeling is far more interested in actually understanding the how and why underlying cognition and consciousness; unlike AI, which, as you said, is not useful for that purpose, CM I feel is a field that actually attempts to solve the hard problem.

  24. “It would have had that advantage, if the line had only been drawn between appearance and performance, or between structure and function. But if the line is instead between verbal and nonverbal performance capacities then it is a very arbitrary line indeed, and a very hard one to defend.” (Harnad, p.4)

    I actually disagree that Turing didn’t think this through. I think he truly sees the sensorimotor capacities as window dressing, and I tend to agree. I can’t think of a non-verbal display that would be more of a giveaway than a verbal one if performed incorrectly. If you can build a machine that convincingly communicates with you via text, to the point that you can’t distinguish it from a human, it seems to me like all the non-verbal behaviours that accompany that are just limited by the current capacities of engineering. I thought we weren’t really interested in that sort of thing.

    For example, I think of facial expressions as displays of emotion. If you have a machine that can pass T2, then the machine is capable of verbally displaying emotions at the appropriate moment. So the machine “knows” (maybe not in a self-aware sense) when it’s time to act sad, happy, surprised, etc. in order to be convincing. From there, you just need a very capable engineer to produce something that looks like a face that can take on a big enough number of conformations that it convincingly reproduces human expression visually. That to me seems akin to dressing a machine up in “artificial flesh” as Turing says. Although it’d be very impressive to create such a replication of the human face, the hard part is already done if the machine knows when and how to act emotionally.

    I just can’t think of a T3 task or behaviour that wasn’t fundamentally more difficult to achieve at the level of T2, and that wasn’t ultimately tied to the cosmetic limits of engineering, as opposed to something else (i.e. our ability to create a machine that thinks). I’d love to hear an example. Why is it that “Stevan says” that a machine can’t pass T2 unless it can also pass T3?

    1. We'll be discussing the symbol grounding problem in week 5, but since I have no secrets, the short answer is that connecting symbols to the things they mean in the world requires T3 sensorimotor capacity. T2 connections are only symbol to symbol. And the TT is not about fooling. So you might be able to create a super-Siri T2 chatbot, with enough symbols to fool people for a while. But it could never talk about anything that was not anticipated by those (meaningless) symbols, so eventually the trick would become obvious. (And besides, talking is not the only thing that humans can do!)
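      To see what "symbol-to-symbol connections" look like, here is a toy ELIZA-style chatbot in Python (an illustrative sketch of shape-based pattern substitution; the rules are my own invention, not anyone's actual T2 candidate). Nothing in it connects any symbol to anything in the world, and any input its rules didn't anticipate exposes the trick:

      import re

      # Each rule maps an input *shape* to a canned response template.
      RULES = [
          (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
          (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
          (re.compile(r"\bmother\b", re.I), "Tell me more about your family."),
      ]

      def respond(text):
          for pattern, template in RULES:
              match = pattern.search(text)
              if match:
                  return template.format(*match.groups())
          return "Please go on."  # fallback when no shape matches

      print(respond("I feel lost"))       # -> "Why do you feel lost?"
      print(respond("My mother called"))  # -> "Tell me more about your family."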

  25. "1. Do the two candidates have identical performance capacity? 2. Is there any way we can distinguish them, based only on their performance capacity, so as to be able to detect that one is a thinking human being and the other is just a machine?"

    I really appreciated the way that this response addressed the scale of performance capacity that Turing maintains as a goal throughout his paper. While the T2 email correspondence model may have its own research value (particularly regarding language learning) - Harnad's insistence that "the performance of the machine must be totally indistinguishable from that of a human being -- to anyone and everyone, for a lifetime" is at the heart of my issues with the literature surrounding the Turing test.

    The sensorimotor interaction of any individual with the outside world, and the context that is given through this interaction (regarding current affairs or the mood of a city or the political climate of a particular party) seems highly important for the nature of our individual cognition.

    ReplyDelete
  26. “Doing is performance capacity, empirically observable.” Thinking/cognition is an internal state, associated with neural activity and introspectively observable mental states. The Turing Test is behavioural in the sense that we could achieve a T3 causal mechanism if we can make it DO everything we can do, and our aim is to explain how and why we do what we do. The hard problem is explaining how and why we FEEL. Are feelings just a byproduct of our brain states, or is the mind somehow not material, violating known ideas of conservation of energy while still causally affecting the universe? Harnad states in his paper “No Easy Way Out” that he doesn't believe the hard problem is solvable; not because we're incapable of comprehending the answer if confronted with it, but because of the difficulty of explaining feelings functionally. Functional questions seek objective how-and-why explanations. How do you explain how/why we feel in a way that doesn't resort to the answer that we just DO? This might sound naive, but I can't shake my deeply held belief that everything in the universe has some sort of objective explanation, whether or not it's observably accessible to us using available scientific discourse.
    “Now we ask: Do the successful candidates really feel, as we do when we think? This question is not meaningless, it is merely unanswerable -- in any other way than by being the candidate. It is the familiar old other-minds problem (Harnad 1991).”
    If thinking capacity really is independent of consciousness (sorry, feelings) then figuring out how and why we do what we do should be sufficient to explain thinking capacity. (Can something think without feeling?) But if our goal is truly to understand EVERYTHING about the human experience of existing, should we still abandon the hard problem simply because it’s unsolvable? Maybe in its pursuit, it could help us frame other questions in ways that lead to progress and discovery. From my understanding, Harnad has an epiphenomenalist regard towards feeling; like steam from a train engine, it’s the byproduct of physical states that has no causal effect on the brain (even though he acknowledges this doesn’t explain a lot of things like why it feels like you’re the one who decides to lift your hand, for example). This could negate the need for exploring the hard problem. Although we would not be able to prove it (other minds problem), feelings could probably arise in a successful implementation of a T3-passing mechanism. I doubt this would help us fully understand feelings though.

  27. Language, without the meaning it carries, is just a collection of arbitrary symbols. It is equivalent to a Turing machine that produces responses according to an algorithm. The problem is that, in a T2 test condition, a machine with a fully developed algorithm can fool people into believing it is intelligent and capable of communicating via email, because all it is doing is producing symbols in response to incoming symbols. But doing this alone does not achieve cognition, because our cognition understands the meaning behind the symbols, making connections between symbols and features of the real world. This is where the symbol grounding problem comes into play. And categorization is far more complicated than just naming, because of the ever-changing stimuli in our environment.

  28. "But here we are squarely in the T2/T3 equivocation, for a simulated robot in a virtual world is neither a real robot, nor can it be given a real robotic Turing Test, in the real world. Both T2 and T3 are tests conducted in the real world. But an email interaction with a virtual robot in a virtual world would be T2, not T3."

    So if I'm understanding this correctly, T2 can exist in both reality and a virtual (simulated) world, but T3 is grounded in reality?

  29. This article was enlightening, and the nuances Harnad adds to the theory helped a lot in understanding the Turing concepts. I agree with Harnad's proposition that T3 is the level of test that “Turing really intended (or should have!)”. This is the test that truly takes the Turing test beyond being just a proxy for human language: it requires the machine to be a full robot that can do pretty much everything humans can do, not just exercise verbal capabilities. As mentioned, T4 and T5 are not really necessary, because we don't need to recreate a human exactly, with realistic skin and a neural system, for the TT to be passed. Additionally, I found the idea of our “verbal abilities” being grounded in our “non-verbal abilities” very interesting. This further supports T3 as the ideal test, because it would require programming for a lot more than human-like verbal compositions. Harnad also argues that the machine need not be a computer specifically; it can be any machine.

    I vehemently agree with Harnad’s point that the TT cannot be reduced to the likeness of a Gallup poll, with the expectation of statistical significance. It has to be a test that truly recreates human performance 10/10 times regardless of observer.

  30. “An interesting variant on the idea of a digital computer is a "digital computer with a random element"... Sometimes such a machine is described as having free will (though I would not use this phrase myself)”
    I found this passage interesting because I find that this is what AI researchers are worried about with the development of AI. Currently, the most intelligent machines are module-specific: a self-driving car is only good at driving, and AlphaGo is only really good at Go. These AI machines are experts in only one capacity and would be useless if asked to do anything else. But what Turing said about “a digital computer with a random element” is the equivalent of today's AI research focusing on the idea of a program that can be expert in different capacities, a sort of C-3PO that has both sentience and sapience (T3).
    Although we are far from this and should probably understand cognitive modeling first, I find it an interesting aspect of AI research.

