Saturday, January 6, 2018

2a. Turing, A.M. (1950) Computing Machinery and Intelligence

Turing, A.M. (1950) Computing Machinery and Intelligence. Mind 59: 433-460

I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B. We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"




1. Video about Turing's work: Alan Turing: Codebreaker and AI Pioneer 
2. Two-part video about his life: The Strange Life of Alan Turing: BBC Horizon Documentary and 
3. Le modèle Turing (video, in French)

51 comments:

  1. I think that using the imitation game to prove or disprove that computers can think is not a full explanation. For example, if you take a computer being better than a human at this game to mean that it is thinking, then you are not putting a limit on its capabilities. Just because computers are able to manipulate their actions in order to win a game does not mean they are able to think in the way humans do. We are able to hypothesize about future events, manipulate objects and numbers in our mind's eye, and do many other things that are not explained by the computer's ability to play the imitation game. If you do not put some sort of limit on them, then what else will they be able to achieve, and when will they surpass humans in these realms? Unless we extrapolate these findings without cause, these questions remain ones for the future.

    Replies
    1. My question relates to Rachael's in that I think there's a jump of logic in the Turing Test, from a machine passing it, to that machine doing anything and everything a human can do. What if it is just a failure of humans to distinguish between the machine and humans? Does the Turing Test account for a threshold of mimicry that would make machines indistinguishable rather than asserting the machine can do ANYTHING and EVERYTHING a human can do?

      Maybe I am missing components of the Turing Test also?

    2. That's why we have Isaure, our MIT T3 robot: How does your question apply to her? (And it would have been the same if we were communicating with Isaure only via email -- for years, and about anything you can discuss in email.)

  2. Turing states that growing a “complete individual from a single cell of the skin of a man” would not entail the construction of a thinking machine. Therefore, when we think of thinking machines we should refer to digital computers. In that vein Turing espouses the idea that we should use digital computers in the imitation game. Is Turing’s objection due to the lack of knowledge on how this cell would grow, specifically, therefore not allowing us to understand how the thinking machine is constructed? I wonder, then, if his opinion would change if we were able to theoretically reconstruct the human brain nucleotide by nucleotide and understand the function that each of these building blocks has in brain functioning, then hook up our completely constructed brain to an output and allow it to take part in the imitation game. Basically, would Turing allow a zombie with an executive unit, store and control (as his digital computers have) to try and play the imitation game?

    Replies
    1. 1. Cloning or growing a TT robot would not explain how it works. (Neither would cloning a car or plane.)

      2. A digital computer alone could only (in principle) pass the verbal TT: T2.

      3. Building the brain piece by piece only explains how it works if we know how it generates our capacities. (We already know that it generates our capacities.)

      4. Turing's Test is for whatever can pass the TT, whether T2 (verbal only) or T3 (robotic: Isaure). (Which algorithms, if computational, which dynamic processes, if not computational.)

      5. Reminder: (almost) anything is simulable by computation but that does not mean that (almost) everything is computation.

      6. Computer simulated flying is not flying (we can see that); computer simulated thinking need not be thinking (although we can't see that, because of the other-minds problem).

      7. A computer simulated plane can't fly, even if it's hooked up to wings and wheels.

  3. “The criticism that a machine cannot have much diversity of behaviour is just a way of saying that it cannot have much storage capacity. Until fairly recently a storage capacity of even a thousand digits was very rare.” Turing’s claim that diversity of behavior is just a way of saying that it cannot have much storage is a false comparison, isn’t it? If Turing meant that they are the same *kind* of thing in machines, I mean. Diversity of behavior would be a property of the machine’s software (i.e. it has algorithms/procedures for doing different things) but storage is just a property of hardware (i.e. memory). So it isn’t the same comparison, and diversity of behavior might actually be a more legitimate criticism than storage capacity. The problem of diverse behaviours is much harder because it's a different problem entirely. Storage space and processing speed (hardware problems) have increased quite rapidly (exponentially, I think?) but getting a machine to do a wide range of things (software problems) like people can... there has not been much relevant progress in that, probably because they are different *kinds* of problems. That said, I agree with Turing that in principle a machine could have both (diverse behavior and more storage). I also think, as Turing points out himself, that the argument from consciousness, the argument from various disabilities, and Lady Lovelace’s objection are all really just the same kind of argument: a machine has a limit to what it can do (namely, that it can’t be creative in the way that thinkers are and do all the corresponding unique behaviours that come from that creative property of thinkers). I think those arguments deny the possibilities of computation – that lower-level (simpler) computations can’t have some emergent (more complex) features, like creativity, resulting from them. How this happens is something else, but there doesn’t seem to be any proof (mathematical or otherwise) for why machines couldn’t (i.e. no proof that there is no algorithm that could in principle do those things).

    Replies
    1. There's no proof that computation can do what cognition can do. (Passing the TT with computation alone would be the proof.) Whether there is proof that computation alone cannot do what cognition can do will be discussed later in this course.

  4. Should the Turing test be re-adapted to today’s technology? Turing starts off by proposing to address the question “Can machines think?” How is his test helping us answer this question? If A and B are only communicating with the interrogator (C) through a teleprinter, I don’t see how it would be possible to really differentiate a “modern” computer from a human. Hearing Isaure answer a question in real time is much more convincing than reading a typed answer on a screen or on paper. Take this situation for example. You participate in a study where you have to chat with another participant for a while. At the end of the experiment, you are told that it was actually a computer you were talking to. I don’t think anyone today would be so stunned by this information. Thus, now that technology is able to produce speech with a human-like voice (through our smartphones, for example), would it be sensible to make the Turing test auditory instead of simply an exchange of typed answers?

    Replies
    1. 1. None of the technological changes since Turing have changed what computation is.

      2. Read 2b to compare the robotic version of the TT (T3: Isaure) with the original verbal one (T2).

      3. A chatbot fooling you for a while is not the TT, not even T2, which should be able to do it for a lifetime (like emailing with Isaure for years, about anything), not just about questions and answers. Don't confuse Isaure with Siri...

      4. There's much more to the robotic version of the TT (T3) than seeing and hearing the robot: The robot's words have to be grounded in the things in the world they refer to. (Isaure can do more than talk: she can also show you what she means.)

      (And the TT is not about fooling or imitating at all. Nor is it a game. The game analogy was just used to introduce the idea that the way to test whether we have successfully reverse-engineered our cognitive capacity is to build a mechanism that has that capacity. -- Whether Turing was a computationalist (one who believed computation alone could do it all) is another question. "Stevan [a pygmy] says Turing [a giant] was not a computationalist." What do you think (and why)?)

  5. Building off of Emmanuelle’s response, if we were to adapt the imitation game/Turing Test to today’s technology, I do not think that the electronic voices of Siri or Alexa would work very well. Although human-like, these voices are still very much distinguishable from a true human voice. Perhaps a way that this could work is if a human sat next to the machine and read off what it was saying to the interrogator in response to their question. Or, the computer could be programmed with responses that were pre-recorded by a human, in which case it would scan through its store and pick out an answer that matches the interrogator’s question. However, both of these situations render the use of voicing rather useless, don’t they? Two human voices are equivalent to two written responses. In either case, it would be difficult to distinguish the computer from the human. Even with today’s technology I do not believe that the use of voicing would be a necessary addition to the Turing Test.

    Replies
    1. Yeah, I definitely agree with you. I think the purpose of prohibiting the use of voice in the Turing Test is to isolate variables, so we can really decide whether the mechanism (resembling thought) is indistinguishable from humans. Human voices have intonation that I don't see how a machine can achieve. Like you said, the voices of Siri or Alexa don't resemble human voices very well.

    2. Ariana, I think you are right in saying that Siri and Alexa would probably not pass the test if it was adapted to today's technology. At the moment, these technologies are clearly distinguishable from human voices. What I was trying to express is how I have a hard time understanding the validity of the Turing test. You are right that maybe adding an auditory component to the test would not be the best idea, but in my opinion there is still something missing. Most machines in 1950 probably wouldn't have passed the test. Today however, my own cellphone could pass this test and I think this is not representative of the underlying construct it is trying to test.

    3. Ariana, it still sounds as if you think the purpose of the TT is to trick people: it's not. It's to test whether you have successfully reverse-engineered human cognition. You really want to generate the full capacity of Isaure, not to fool people, but to make sure you really got it all.

      Amber, people's voices have intonation: why do you think Isaure's couldn't?

      Emmanuelle, no candidate can pass the TT today, not even close, neither T2 nor T3:
      "The Turing Test Is Not A Gallup Poll (And It Was Not Passed)"

    4. No candidate could pass the TT back in the 1950s when Turing wrote his paper, and no candidate can pass it today. Do you think we will ever have a candidate that does, based on the definition that it must be able to pass the TT for a lifetime? This depends heavily on what 'lifetime' means in this context - is it the lifetime of the robot, or of an average human, or of the whole human race? I understand that the TT is a methodology for seeing if we can reverse-engineer human cognition. However, depending on the scope of 'lifetime', isn't it possible that someday we will (or already have) successfully reverse-engineered human cognition, without being able to say with certainty that we have, because the candidate technically hasn't passed the TT for a whole 'lifetime' yet?

    5. I'm not sure if T3 would use the same mechanism, but from my knowledge Siri and Alexa use text-to-speech software that parses words and phrases together from prerecorded files of one specific voice. So they're not producing spontaneous speech, and therefore don't have prosody, because pitch doesn't have any correspondence to meaning for them. I was hypothesizing that maybe Turing made the test nonverbal so it wasn't a dead giveaway who the machine was. Maybe that's a product of the time (1950), but even today we don't have universal AI programs that can produce speech like people. I think the movie Her really skewed perceptions of that existing, because that program was able to sound human, but in fact it was actually a human who voiced that AI program. I've heard about a Montreal-based start-up that was working on creating an AI system that can learn to mimic a person's voice by analyzing speech recordings and transcripts and the relationship between them. In doing so, they can actually produce intonations. According to this company, their speech synthesis is produced by artificial neural networks. This was introduced in May 2017.


  6. “These have instructions involving the throwing of a die or some equivalent electronic process; one such instruction might for instance be, "Throw the die and put the resulting number into store 1000." Sometimes such a machine is described as having free will.”

    Free will is defined as the ability to act without constraint. My question is how, if at all, this could be interpreted as free will in any capacity. If you are playing with probability or randomization, then there is no free will. The only decision that the computer can make is whether to roll the die, and if it does roll, it can choose what to do with that outcome. What it cannot do is choose the outcome itself. Maybe this is the only distinguishing factor between the computer and the human. A human has a choice whether or not to follow the rules of the imitation game. If a human decides to start giving nonsense answers, perhaps out of boredom or rebellion, then it would be obvious who the human is. The computer is always constrained to its programming.
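Turing's random element is easy to make concrete. The sketch below (a hypothetical three-instruction machine in Python, invented for illustration; nothing here is from Turing's paper beyond the die instruction itself) shows an otherwise deterministic store-and-instructions machine with one "throw the die" instruction:

```python
import random

def run(program, store=None):
    """Execute a toy program on a machine with a numbered store.

    Hypothetical three-instruction set, invented for illustration:
      ("put", addr, value)  -- store a constant at an address
      ("add", dst, a, b)    -- store[dst] = store[a] + store[b]
      ("die", addr)         -- Turing's random element: store a die roll
    """
    store = {} if store is None else store
    for op, *args in program:
        if op == "put":
            addr, value = args
            store[addr] = value
        elif op == "add":
            dst, a, b = args
            store[dst] = store[a] + store[b]
        elif op == "die":
            (addr,) = args
            store[addr] = random.randint(1, 6)  # "throw the die"
    return store

store = run([("put", 1, 10),
             ("die", 1000),       # put the resulting number into store 1000
             ("add", 2, 1, 1000)])
# Everything except the single "die" instruction is fully determined
# by the program; the randomness sits in the store, not in any "choice".
```

The point of the sketch is that the only non-determinism is the one `die` instruction; calling that "free will" just names a random-number source, which is exactly the comment's objection.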

    Replies
    1. First of all, I agree that the ability to randomly determine an outcome does not constitute free will, and your reasoning for why this is the case is perfectly correct. In the paper, however, Turing is not suggesting that he understands this as free will. Instead, he is stating that this is simply a function that a computer can perform in addition to discrete-state operations. I think that he mentions the random element as a way to start making headway into his description of what some people call a Universal Turing Machine. UTMs are akin to Isaure (the robot) and, as Professor Harnad put it, can be described in the following way: “It is about creating a causal mechanism that can do anything and everything a human being can do or say”. Turing is simply stating that for a UTM to be possible, we must consider the fact that random (or at least seemingly random) action is possible from a computer in the same way it is possible from a human.

    2. Ariana, does the fact that it feels as if you are freely choosing what you do mean that you are really freely choosing? That just brings us back to the "hard problem" of the causal role of feeling.

      You are right that this has nothing to do with randomness, but it may have a little to do with predictability. Imagine what you would think (and feel) if it were possible to put an EEG on your head and ask that once a signal sounds, you should quickly say any number between 1 and 10.

      After the "pick" signal, but before you say it, the EEG is used to predict and print which number you will say. It always predicts correctly before you say it, but you are only shown what it predicted after you've said the number you picked.

      Once you've verified that it always predicts your choice correctly you think: "Well, it can predict what I'm going to say once I've made my choice, but it can't really predict my choice."

      So the prediction is shown to you earlier and earlier after the "pick" signal, and it's always right.

      If something like that were possible, you still wouldn't lose your feeling that you were doing the picking, but would you still believe it?

      Zach, I agree that Turing's reason for throwing in a non-algorithmic event was to be able to generate the flexible and less predictable features of some of our cognitive capacities.

  7. If we do succeed in creating a machine that can learn on the basis that it changes its rules every time it encounters an imperative to do so, then I still don’t believe it would be a realistic project. Even if we suppose that the machine would have a better memory and make fewer mistakes, which would shorten the amount of effort and time to teach it something, it would still take a lot of time to introduce it to everything a child learns. Children are incredible in that they can learn grammatical rules by generalization and so don’t need to hear every past tense of every verb to somehow know the general rule. Maybe a machine could learn the general rule and then apply it, which would also take little time, but it seems that children need less input in order to learn things. They can learn words by just hearing them once or twice, sometimes even when those words are not directed at them. If we don’t use a system of reward and punishment to teach a machine, then what system could we use that would maximize the machine’s intake of possible things to learn, the same way a child takes in the world around them and tries to make sense of it?

    Replies
    1. You are forgetting that a machine could sift through thousands and thousands of texts and radio recordings in a small period of time in order to make generalizations. So even if a machine requires more input, we wouldn't necessarily notice, because the time it would take would be so much shorter. Machines do seem to be good at generalizing rules. I'm not sure the problem for a machine is time; it is more subtle problems where there might not even be a rule guiding our judgment. A machine can be good at generalizing a rule, but what if there isn't a rule to generalize? Or what if the rule requires knowing what it is like to be human? That would be an interesting thing for a machine to model. An interesting problem that they still haven't solved is how to disambiguate pronouns, but I wonder if that's just a matter of progressing our technology.
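The past-tense example can be caricatured in a few lines (a toy Python sketch with invented names; real morphology learning is nothing this simple): a learner that tallies the suffix its training pairs add and generalizes the most common one to unseen verbs:

```python
from collections import Counter

def learn_suffix_rule(pairs):
    """Tally the suffix each (present, past) training pair adds and
    keep the most common one: a crude generalization from examples."""
    votes = Counter()
    for present, past in pairs:
        if past.startswith(present):
            votes[past[len(present):]] += 1
    suffix, _ = votes.most_common(1)[0]
    return lambda verb: verb + suffix

past = learn_suffix_rule([("walk", "walked"), ("jump", "jumped"),
                          ("talk", "talked")])
past("play")   # → "played": the "-ed" rule, generalized to an unseen verb
past("go")     # → "goed": the same over-regularization error children make
```

Note that this toy learner also inherits the child's over-regularization error: it produces "goed" as happily as "played", which is one reason rule extraction alone falls so far short of what children do.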

  8. “May not machines carry out something which ought to be described as thinking but which is very different from what a man does?”

    This relates to the argument from consciousness about only knowing someone can think because it feels itself thinking—"the only way to know a man thinks is to be that particular man."
    Further, this points to why machines may not think exactly as humans do; but this does not mean they are not thinking nonetheless, unless you cannot disentangle feeling from thinking.

    The argument from various disabilities is probably the one I most agree with, but it makes me wonder whether the machine that plays the imitation game is thinking or is simply programmed for certain answers. For instance, the response about making mistakes says that the machine would be built so that it makes an occasional arithmetic mistake in order to be convincing; but then is this machine thinking, or is it just processing the data given to it based on its programming?

    However, my biggest hesitation is not whether it is thinking, because by the definition given the machine will fulfill thinking; what I really question is whether it is human thinking that the machine is doing. I do not disagree that the machine can mimic human thinking and fool the investigator, but I think human thinking cannot be disentangled from human feeling, and therefore I don't think the machine can be doing human thinking. Nor do I think that the power of the machine is any less because of this; rather, I believe the process it is doing is different from the one we do.

  9. Turing comments that digital computers fall within the class of discrete-state machines. This means that even if a particular machine has billions and billions of states, it can only approximate a continuous machine such as the human nervous system, whose number of states no finite machine can reach. I agree with Turing’s refutation of the argument from the continuity of the nervous system: with a discrete machine with so many states, it will be very hard to detect the difference between the two systems, even though it is present, and detecting such differences is exactly what the Turing Test is designed to do. The fact that a machine manages to pass the Turing Test is based on its ability to do and say everything that a human can, regardless of its characteristics. In essence it doesn’t matter what it uses in order to do and say what it can, just that it is able to. This doesn’t mean, however, that in the process of creating this discrete system we are not losing something from a continuous system such as the nervous system; just that this difference is not noticeable and thus, for the purposes of the Turing Test, not relevant. This is a limitation of the Turing Test: it can be used to tell that a machine possesses human-like abilities, but it is not precise enough to account for what might be lacking in the transfer from continuous to discrete systems.
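The discrete-approximation point can be sketched numerically (a toy Python illustration, not anything from Turing's paper): map a continuous value onto n evenly spaced levels and the worst-case discrepancy is half a step, which shrinks as the number of states grows, eventually falling below any fixed observational noise level:

```python
def quantize(x, n_states, lo=0.0, hi=1.0):
    """Map a continuous value in [lo, hi] onto the nearest of
    n_states evenly spaced discrete levels."""
    step = (hi - lo) / (n_states - 1)
    return lo + round((x - lo) / step) * step

# The worst-case error of the discrete stand-in is half a step,
# i.e. 0.5 / (n_states - 1) on the unit interval, so it shrinks
# as the number of states grows:
for n in (11, 101, 100001):
    worst = max(abs(quantize(i / 997, n) - i / 997) for i in range(998))
    assert worst <= 0.5 / (n - 1) + 1e-12
```

This is the sense in which Turing argues the interrogator could not exploit the continuity of the nervous system: once the discrepancy is below the noise floor, the discrete and continuous systems are behaviourally indistinguishable.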

  10. If we define the question "can machines think?" as "can a machine pass the Turing Test?", then I do not believe that machines can think. In the Turing Test, the machine would take the part of "A", whose goal would be to convince the interrogator that "B" (who is actually a human) is the machine and "A" (the machine) is the human.

    The reason I don’t think the machine would be able to convince the interrogator that it’s human is because there are infinitely many questions the interrogator could ask (many more than could possibly be programmed for the machine to respond to), which would give it away if it responds in a manner that doesn't make sense.

    This is very similar to Lady Lovelace’s objection, which I don’t think Turing did a good job at disproving. Turing took this objection and turned it into “a machine can never take us by surprise", which I don’t think is equal to the original objection as described above. I think he’s wrong in equating surprise with being able to answer any question asked in an appropriate manner.

    Replies
    1. Upon rereading, I think that my argument is more closely related to the argument from informality of behavior than to Lady Lovelace's objection, and that it was I who misunderstood her objection.

  11. “We too often give wrong answers to questions ourselves to be justified in being very pleased at such evidence of fallibility on the part of the machines”

    In this section, Turing is finding flaws in the Mathematical Objection to machines being able to pass the Imitation Game. In doing this, he says that human intellect itself may have its limitations and that we are also likely to give wrong answers to particular questions, despite being humans ourselves. It seemed strange to me that machines have to answer all questions that humans could or could not possibly answer in order for us to be fooled into thinking that they are human as well. Is there another possible way that we could answer the “can machines think?” question, without necessarily asking the “can machines think as humans” question?

    Replies
    1. Perhaps a better way to formulate the “can machines think?” question is to ask if machines can answer questions that do not have a single correct answer, or even a correct answer at all. These kinds of questions are critical or opinion-based questions. Sometimes a human may not answer a critical question due to lack of knowledge about the subject, but like a human, a thinking machine should be able to produce at least one answer to a critical question when given sufficient information.

      I find it worth mentioning that Gödel’s theorem, which states that “in any sufficiently powerful logical system statements can be formulated which can neither be proved nor disproved within the system, unless possibly the system itself is inconsistent”, seems to imply that human logical systems are inconsistent. Indeed, this is almost certainly true; few humans have perfectly aligned systems of belief and experience without any contradicting values. It may be the inconsistency of the human mind’s logical system that allows it to prove or disprove a statement and give an opinion.

  12. "...the only way by which one could be sure that a machine thinks is to be the machine and feel oneself thinking" (Turing, 1950).

    This argument really stuck with me because it brings both problems of consciousness into the discussion. It seems like this discussion is somewhat circular; if we were able to program a machine to embody what we consider to be consciousness, does that mean that we have solved both the easy and hard problems? It would appear yes, but how would we even have a genuine understanding that we had solved it? How could we possibly know, if we are not the machine itself? Perhaps the machine could tell us how it feels or thinks, but there is no way to know if the machine's sensation of feeling/thinking is comparable to ours. This ties into Lady Lovelace's objection as well. Additionally, if the imitation game's aim is to create a machine that can do anything a human can do, I think it should be able to operate under free rein of thought, something I think is necessary in order to exhibit consciousness. Otherwise, it is simply following the inputs and generating the outputs that 'Human X' would in a particular situation. The latter would still be an amazing feat, but I don't think it solves the problem of consciousness.

  13. I feel that Turing's point addressing "arguments from various disabilities" is an important one to make, because there needs to be a clarification made between Turing's "machine," which is an abstract concept, and the machines that we come in daily contact with (I think the example used in class was a toaster).

    However, in the passage where he addresses a machine's ability (or rather, inability) to make mistakes, I feel that he makes a point that actually weakens his overall argument. He makes the point that any mechanical "faults" are actually the result of programming, whereas I believe we discussed in class that the point of Turing machines being able to show thinking activity was not that they are programmable to do so. Is this statement simply a product of the comparatively limited abilities of machines during his time? He does postulate that in the future, machines will be able to "modify [their] own programmes so as to achieve some purpose more effectively"...

  14. I'm not entirely satisfied with Turing's rebuttal of the Argument from Informality of Behavior in "Computing Machinery and Intelligence". He frames the argument as follows:

    "If each man had a definite set of rules of conduct by which he regulated his life he would be no better than a machine. But there are no such rules, so men cannot be machines."

    He goes on to say that the argument makes no sense unless we substitute “laws of behaviour” for “rules of conduct”, before going on to try and disprove the point. He argues that we can’t prove with any confidence that no behavioural laws which regulate our lives exist, so it’s possible that we are machines, or equivalent to machines.

    I don’t understand why we need to present a situation with NO behavioural laws at play. Wouldn’t it be enough to supply just one aspect of behaviour that isn’t governed purely by behavioural laws to distinguish us from machines?

    If we do something that violates our basic biological programming (i.e. fasting for months on end, pulling all-nighters, jeopardizing our own survival for the benefit of others or even just for fun, etc.) just because we want to, haven’t we already shown that there is at least some behaviour that behavioural laws don’t predict?

    For the purposes of the Imitation Game, it may not matter. Turing would probably point out that a machine could, especially in the context of the Game, mimic behaviour that appears not to be defined by laws. Put another way, he might argue that “wanting” to defy what appears to be basic programming is just the result of more elegant and convoluted programming, and can possibly be explained by some kind of behavioural law (i.e. ‘defy a human to show they have free will and they will do something maladaptive’).

    Replies
    1. I have a question concerning the following passage, The Argument from Extrasensory Perception. Turing refers to "overwhelming" statistical evidence for telepathy. What is he talking about? Was this something the scientific community bought into during Turing's time? The whole passage feels jarringly fantastical compared to the rest of the paper.

      I assume we don't need to pay too much attention to this passage for the purposes of the course, because we've established that we don't think our minds work according to some extra hidden force in the universe, yet to be discovered (which Turing claims to drive ESP). But I am curious to know where his assertion comes from.

  15. "Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain."

    This struck a chord with me, as I would never have guessed to start at the bottom and work my way up: starting a program in its infancy and helping the program grow until it had the capability of a human adult (as Isaure does) or was able to convince humans that it too was a human. The only way we could use this technique is if we were trying to see if it could learn or DO things in an adaptive way. At that point we wouldn't be measuring consciousness, however, as creating software that learns and adapts to new information is still considered just DOING. Google has already developed software that can learn and adapt for itself; would this be considered as starting out in infancy and growing into an adult? Although this would be interesting to study, I feel like we constantly run into the other-minds problem, as you would never be able to tell if Isaure was conscious or could feel etc., even if you followed her from infancy to adulthood. The same could be said for a human, as you wouldn't be able to measure the level of consciousness either, even if you began in infancy (other-minds problem). Although I understand Turing's point wasn't to find consciousness, this is just something that stuck out to me.

  16. It seems to me that the Turing Test is a necessary but not sufficient condition for answering the question "can this machine think like a human?" If it can't pass the test, then it is unlikely that it is thinking like a human. If it can pass the test, this is still not sufficient to say anything about what thinking might be going on, only that it can imitate humans very well. We cannot assume that thinking like a human is the only way to think.

  17. “I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.”

    I find it interesting that Turing actively distances himself from discussing consciousness, choosing instead to frame his test in terms of “thinking”. The Turing Test has been discussed in a few of the other classes that I have taken, but all of them have misrepresented it by claiming any machine that could pass the Turing Test would be conscious. I find Turing’s actual proposal to be much more palatable than the version given to me in those other classes. His argument seems almost tautological: if we define thinking by what thinking beings can do, and a machine is made that can do everything that such a being can, then the machine must be thinking by the definition we just created.

  18. “If the man were to try and pretend to be the machine he would clearly make a very poor showing. He would be given away at once by slowness and inaccuracy in arithmetic. May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.”
    I think the objection could remain strong. As the quote says, it’s certain that we would fail to pretend to be machines, but it is likely that a machine could be indistinguishable from a human mind if it plays the imitation game satisfactorily. Suppose engineers could one day construct a machine able to pass the Turing Test; how are we supposed to know the full performance capacities of that machine? In other words, the machine could be mimicking the human computer by slowing down its processing speed or making errors intentionally; its full capacity could be greater than or different from that of the human brain.

    Lastly, in the passage on The Argument from Informality of Behaviour, I am confused about how Turing differentiates “rules of conduct” from “laws of behaviour”. My understanding is that the two notions both describe bodily responses (physical or conscious) to environmental stimuli. Turing states that “being regulated by laws of behaviour implies being some sort of machine”; what does being regulated by rules of conduct imply?

  19. How can Turing reconcile the difference between a brain being continuous and a Turing machine being discrete-state? If one simulated all the brain's activities and all the neurons that exist in the brain, would that mean a discrete-state machine can successfully mimic the processes of a continuous machine, thereby achieving the imitation of human behaviours?
    Meanwhile, since there are different hierarchies of Turing test, does every level require semantic interpretability to pass it? It seems that even judging by email, which is the T2-level test, the machine needs embodied cognition to learn what its symbols refer to in the external world; in other words, programming and computation alone would not be enough to build a machine that can pass the Turing test.
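Turing's own reply to the continuity objection is that a discrete-state machine can approximate a continuous process as closely as we like by taking small enough steps. A minimal sketch of that idea (the differential equation, step count, and tolerance are illustrative assumptions, not anything from the paper):

```python
import math

# Sketch: a discrete-state update can track a continuous process
# arbitrarily well as the step size shrinks. Here we step the
# continuous system dx/dt = -x with Euler's method.

def discrete_decay(x0=1.0, t=1.0, steps=100000):
    x, dt = x0, t / steps
    for _ in range(steps):
        x += dt * (-x)   # one discrete state transition per step
    return x

approx = discrete_decay()      # the discrete machine's answer
exact = math.exp(-1.0)         # the continuous system's answer
print(abs(approx - exact) < 1e-4)  # True
```

With 100,000 steps the discrete trajectory is already far closer to the continuous one than any interrogator's question could resolve, which is the thrust of Turing's reply.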

  20. I found the parallels between a digital computer and the human brain's working memory to be very interesting. The function of working memory is to maintain information online and manipulate stimuli as they come in; it is essentially thought itself. A digital computer is defined as having three parts: a store, an executive unit, and a control. Similarly, in the model of working memory, there is the long-term memory store, the phonological loop/visuospatial sketchpad, and the central executive. The long-term memory store functions as the store in a digital computer, serving as a memory bank. The phonological loop and visuospatial sketchpad are akin to the executive unit, performing calculations and manipulations of input or stimuli. Lastly, the central executive and the control component of a digital computer are similar in that both ensure that instructions are being followed, overseeing the operations of the system at large. By applying the concept of a digital computer to the working memory of the human brain, I think the boundaries between a digital computer and a human computer become blurred. This extension of definitions might help solve the initial question of “Can machines think?” Since the human mind and the components of a digital computer or “thinking machine” show similar systems, it could thus follow that “thought”, or the ability to think, could originate from both.
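The three-part decomposition Turing gives (store, executive unit, control) can be sketched as a toy program; the instruction set and names below are illustrative assumptions, not anything from the paper:

```python
# A toy digital computer in Turing's sense: a store (memory),
# an executive unit (carries out individual operations), and a
# control (steps through the table of instructions in order).

def run(program, store):
    """Execute a list of (op, args) instructions against the store."""
    pc = 0  # the control: index of the next instruction
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":        # place a constant into the store
            store[args[0]] = args[1]
        elif op == "add":      # executive unit: one arithmetic step
            store[args[0]] = store[args[1]] + store[args[2]]
        elif op == "jump_if_zero":  # control transfer, as in Turing's example
            if store[args[0]] == 0:
                pc = args[1]
                continue
        pc += 1
    return store

memory = run([("set", "x", 2), ("set", "y", 3), ("add", "z", "x", "y")], {})
print(memory["z"])  # 5
```

The analogy in the comment maps naturally: `store` plays the long-term memory role, the `add` branch the phonological-loop-style manipulation, and `pc` the central executive.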

    This may also serve to combat The Argument from Consciousness. This argument is derived from the limitations of a computer in feeling emotions and having a unique subjective experience. If the foundation of a computer’s processing is the same as a human mind’s, who is to say that we cannot give them feelings? Perhaps by extending the processes of the amygdala or the orbitofrontal cortex, we could one day manage to replicate a human’s feelings in a digital computer, thus giving it consciousness.

  21. Regarding one of the contrary views to the main question, "The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so."

    Turing describes how we as humans feel that man has a superiority over the rest of creation. When considering machines, wouldn't this be true, since men are the ones who created the machines? Although machines can compute more than man, and much faster, and can even fool an interrogator into thinking they are female or male, wouldn't it still be true that man is superior, because it is man who created the machine in the first place? In that sense man is far superior, since it is he who built the machine that could “think” the same way we do.

  22. I found it amusing that Turing considered the “argument from extrasensory perception” to be a strong argument against machines being able to think. While we know today that there is no such thing as telepathy, reading this argument made me think about the instances in which we tell people things such as “You read my mind” when they correctly guess what we were thinking. These situations usually happen with people we know well, probably because we can anticipate how they will react and what they will say based on past experiences. We are also in tune with the emotions of people we are close to and the way in which they show their emotions in their facial expressions and body language.

    Therefore, I am wondering if a machine would be able to guess people’s thoughts as well as we sometimes can. My initial thought is that they would never be in tune with people’s emotions quite like we can be because they (presumably) have no emotions themselves but still, due to machine learning, I think they could learn to guess our thoughts fairly well with experience.

  23. I think that the use of the Imitation Game to determine whether or not a computer can think has become an outdated task. At present, we have computers that are able to pass these tests with flying colours, and there are humans who fail because their use of language is too formal or structured. However "smart" our computers may be, I question whether they will be able to be coded with the ability to surpass human intelligence in its ability to learn. The coding of language itself in machine programs is complex enough when you're just programming the syntax of one given language, but if you look at bilingual populations you see examples of "code-mixing", where linguistic data from one or more languages is included in a phrase or utterance. I doubt it would be possible to hardcode a program that would allow for code-mixing to the extent that it is seen organically, unless a computer is able to learn to do so.

  24. I don’t think Turing addresses the argument from continuity in the nervous system with enough conviction. The continuity in the nervous system, the fact that a small error in an impulse may largely affect the size of the outgoing impulse of the next neuron, is a crucial component of what humans would consider “thinking”. The thought process can be broken down to a network of neurons, communicating through electrical impulses across the cortex and subcortical structures. Without this continuity and fluidity in communication, the brain would not function as it normally does. Turing claims that the imitation game would not be able to take advantage of the fact that the machine in question is not a continuous machine, and thus dismisses this argument. But can this machine really be considered to “think” if it cannot communicate answers without prompts? If it cannot be programmed to have a continuous stream of outputs?

  25. "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain-that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants."

    This screams the other-minds problem, because although I agree with and understand the reference to a machine replicating feeling, we do not know how or why humans do it either. Of course, merely looking at whether or not the machine has this capacity, as humans do, is interesting.

  26. "The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so."
    In discussing this argument Turing touches on the relatively widely held idea that machines should not be allowed or encouraged to be more intelligent than human beings. I wonder where we decide to draw the line on how smart technology is allowed to be. In some cases we already have technology that does better than us in certain regards, for example calculators. In those cases the machines were always programmed by us to perform their functions. However, when we approach artificial intelligence that possesses its own consciousness, we are no longer in complete control of these machines.

  28. “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain.”

    I agree with Turing that the most possible way to create a machine that “thinks” like a human is to produce one with a mind like that of a child. That is, to create a machine that “is something like a notebook as one buys it from the stationer's”. This would also imply that the machine has the capacity to learn, which is stimulated by an education process.

    Turing also proposes to teach the child machine through instrumental conditioning – where events that receive a punishment signal are less likely to be repeated and events that receive a reward signal are more likely to be repeated. In order for the child machine to not feel as “sore” as it would after a mistake is made, the machine should have an “unemotional” channel of communication to decrease the number of punishments and rewards required. But if the goal of the imitation game is to make the difference between a human and a machine less detectable, wouldn’t this deviate from how a human actually learns? We don’t have “unemotional” channels to shield us from punishment if we make a mistake. Perhaps the problem is the form of teaching that the machine is receiving. If we implement all three paradigms of teaching (reinforcement, unsupervised learning, and supervised learning) in the machine, would that bring us closer to a machine that “thinks”?
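The punishment-and-reward teaching Turing describes can be sketched as a minimal learning rule: actions followed by a reward signal become more likely to be repeated, actions followed by a punishment signal less so. The two-action task, reward values, and learning rate below are illustrative assumptions:

```python
import random

# Minimal sketch of Turing's punishment/reward teaching idea.
# The "child machine" keeps a learned value for each action; the
# teacher rewards action A and punishes action B.

def teach(trials=500, lr=0.2, seed=0):
    rng = random.Random(seed)
    value = {"A": 0.0, "B": 0.0}    # learned preference for each action
    reward = {"A": 1.0, "B": -1.0}  # the teacher's reward/punishment signals
    for _ in range(trials):
        # pick the currently preferred action, exploring occasionally
        act = max(value, key=value.get) if rng.random() > 0.1 else rng.choice("AB")
        # the signal nudges the stored value toward the outcome
        value[act] += lr * (reward[act] - value[act])
    return value

v = teach()
print(v["A"] > v["B"])  # True: rewarded events are now more likely to be repeated
```

This is, of course, only the reinforcement paradigm mentioned at the end of the comment; the unsupervised and supervised paradigms would need different update rules.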

  29. In the Turing Test, or imitation game, the machine must only be able to convince the judge that it is human, and therefore that it thinks. It doesn't matter what the machine looks like; as long as it can convince the person that it is human, it has passed the Turing Test. Currently, you could say that this has not been possible. But I wonder if Google's translator could be used as a communication device and pass as human? I remember having to write an essay in English 10 years ago and needing to use Google Translate for some paragraphs, and being aware that it was completely incorrect. Yet now (in 2018), whenever I blank on a word (English to French or vice versa), GT does a pretty amazing job at giving me the correct answer, almost as good as asking a native speaker.

    I understand that to pass the Turing Test the machine must have indefinite power and must be able to go on indefinitely, and we are currently far from that, but wouldn't Google Translate be a close contender?

  30. "Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain."

    Turing discusses the child brain as if it were a complete blank slate, easily programmable by environmental stimuli. Although the idea of creating a learning robot that would learn and grow along the same developmental trajectory as human cognition is appealing, it seems a bit short-sighted to me. While in theory it seems plausible to replicate the home and school environments, including randomized negative states, etc., this goal fails to take into account any of the hardwired instincts that drive survival and learning in the first place.

    It seems like this idea of molding "intellectual character" of a program is looking for an easy way to learn about how and why we think. If we knew enough about the brain and cognition to create a learning program with such specifications, it seems likely that we would understand how we think (at least at the rudimentary level of infants).

  31. I think Turing makes a very interesting analogy when he compares the brain to the skin of an onion. Reductionist approaches to neuroscience have continued to break down the inner workings of the mind to the molecular level. But, as Turing points out, proceeding in this way, do we ever come to the real mind? Or do we come to the skin with nothing in it? Where Turing loses me is in saying that the latter case would be evidence that the whole mind is mechanical. It is difficult to parse what he means here. Perhaps he means that showing there is an end to the components of the mind is evidence that there is an identifiable goal by which a machine could be constructed to replicate the mind?

    Turing also articulates a fascinating point regarding the building of a child-brain-like machine. This machine would be educated until it becomes an adult brain. Turing notes the importance of punishments and rewards for the teaching process. Though I agree that this positive and negative reinforcement is fundamental for humans, I cannot conceive how a machine could be punished without feeling. As a matter of fact, if we discuss punishment simply from a neural perspective, it would simply mean that pathways (processes, or code in the case of a machine) that do not produce an ideal result (i.e. an error signal) are less used. Perhaps I have answered my own question here.

  32. Although Turing shares his beliefs that, “can machines think?” is not as valuable to the discussion as his revised query, he reviews several objections to the initial question (to account for the context in which they were posed). I’m particularly interested in the Lady Lovelace objection.

    “The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform.”

    The Lady Lovelace objection is often brought up in disputes about machine-generated content, namely art: music, painting, etc. There’s this belief that machines can’t be creative (a concept closely associated with art) because they can only do what they’re programmed to do. A similar logic could apply to humans too, as it could be argued that we’re programmed by determinism, our genetics, environment, etc. to think and act the way we do. Harnad said something interesting in class: that our genes code for our capacities, not direct actions. Can this same idea be applied to programming? If a machine is programmed to do something, it has the capacity to do that thing rather than the imperative to do so. At what point does a machine stop being a tool (to achieve 3D animation, synthetic music, digital paintings, etc.) and get given agency in the work it produces? It has been asked in numerous articles and debates whether the true painter of a machine-made painting is the programmer; is the true painter of a Rembrandt his master? Or the master’s master before that? By this thinking, we could really only attribute genius/artistry to the universe itself, or agree with Turing that “there is nothing new under the sun”. Even creativity could be seen as recombinations and progressive iterations on what has come before. What does a creative work really mean, though, and are there objective criteria for something to be considered creative? Harnad’s paper on creativity points to views of creativity as a trait, a state, or its own product. His model, based on Pasteur’s dictum that “chance favours the prepared mind”, aligns with Turing’s remark on the false belief that “as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it.” Could we view a machine as an ultra-prepared mind? There’s no guarantee that this preparation/programming/machine learning will necessarily lead to creativity, but it will maximize its possibility.

  33. It is clear that Turing is not wondering whether a machine can think, for he would then have to describe what it feels like to think. He is wondering whether a machine can effectively compute and output answers in a manner indistinguishable from a human's. What is thinking, exactly? Surely it encompasses more than logical reasoning. Can we replicate thought? Can thought be totally dissociated from the influence of feeling, and vice versa?
    What would be necessary to do so? We cannot, as of now, replicate feeling in a symbol-manipulating system.


  34. "None of these [commercially available] computers can outdo a Turing machine"

    I previously did not understand how and why a Turing machine can compute more than any physical computer.

    They explain that it is because (1) the physical computer has access to only a bounded amount of memory, and (2) the physical computer’s speed of operation is limited by various real-world constraints.

    With the concept of the Universal Turing Machine, Turing was able to "draw a fairly sharp line between the physical and intellectual capacities of a man," so that the physical implementation of the system does not enter into the "imitation game" as a way to distinguish the machine from the human. It does not matter what the Turing machine is made of; it could be made of jello. What matters is for the machine to be able to do all that we can do, in an indistinguishable manner.

    Understanding the conceptual idea of the Turing Machine helped me to see that computation is implementation-independent. A Turing Machine can be built in different ways, independently of the hardware. Therefore, an appropriately designed symbol system can be deemed "intelligent" regardless of its medium of implementation.
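That implementation-independence is easy to see in code: a Turing machine is nothing but a transition table and a tape, so any substrate that realizes the table realizes the machine. A minimal sketch (the unary-incrementer program is an illustrative choice, not from the paper):

```python
# Minimal Turing machine simulator: the machine is just a table of
# rules (state, symbol) -> (write, move, next_state) plus a tape.
# Nothing here depends on what hardware runs it.

def run_tm(rules, tape, state="start", pos=0, max_steps=1000):
    tape = dict(enumerate(tape))  # tape cells; unwritten cells are blank "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape))

# Program: append one '1' to a block of 1s (scan right, write on the blank).
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_tm(rules, "111"))  # 1111
```

The same `rules` table would define the same machine whether the cells were transistors, gears, or jello, which is the sense in which computation is implementation-independent.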

  35. Turing utilizes a different approach to answering the ultimate question: can machines think? Instead, he asks whether a machine can pass the Turing Test, which is to have a model that performs the cognitive acts that humans do to a degree where the interrogator cannot tell whether the machine is a machine or a real human. I don’t understand how this is in fact a test of human cognition when it seems to really only be testing for human-like language, or somehow attempting to equate cognition with the creation of language. I feel like the machine acts more as a proxy for human communication than as a true thinking human being.
    Lady Lovelace’s ideas and arguments also intrigued me, specifically the point that a machine can do “whatever we know how to order it to perform.” This implies that the machine can never really “take us by surprise.” Although this is an interesting perspective, I agree with Turing that it is false, for several reasons. He mentions that, first off, doing something surprising requires as much “mental creativity” regardless of the origin of that surprising action or event. Something I have considered recently is the prominence of neural networks and AI-based learning. I think we live in a time where machines tend to surprise us all the time, by composing music, deciphering images, and other things.
    The argument from consciousness is the one that intrigued me the most. That a machine would need to recognize its own actions and the feelings attached to them, to know it is a machine, is something that I definitely disagree with. We do not know the internal perceptions and feelings of many people, and furthermore, we can never be sure whether any human, machine, or other type of entity next to us is truly feeling. We can only assume based on the external actions that have been put on display.

