Saturday, January 6, 2018

(2a. Comment Overflow) (50+)

15 comments:

  1. I find it interesting that in the second reading, under "Is Computation the Answer?", there is a clear emphasis that learning or memorizing things by rote, such as facts, dates, names or even arithmetic, cannot account for the underlying processes in our minds that enable us to know such things and operate in our everyday lives. There must be underlying mechanisms that cannot be explained at a superficial level. I would personally think that the answer to cognition lies not only at the psychological or cognitive level. Although these levels offer a good account of the way we think, process and operate daily, something fundamental is missing: the neurological and biological dimension of the human brain. Cognition could not be only a matter of learning by experience or memorizing, because memory is plastic in a sense. For instance, every time one revisits a memory from the past, that memory becomes weaker and less representative of the "truth" (the original, initial memory). The old memory becomes influenced by the new, current one, rich in details, and the two combine to offer a new version of the same memory. Some things are altered, modified or deleted, rendering the memory less representative of the truth. This is the process of retrieval: reactivating the original trace makes it plastic, meaning we can change a memory and leave it vulnerable to modification. Furthermore, I would think that introspection is likewise neither an adequate nor a complete answer to how cognition operates. The mental-image theory, as explained in the reading, claims that when one engages in introspection, one is aware of the images and words associated with it. Although there is some truth to this, introspection by itself does not explain exactly how these images and names came to be.
When one introspects, I would highly doubt that the memories and images of, say, a person are just lying there in one's mind. Rather, one must recall them. But the accuracy of recall is actually quite low, as is evident from flashbulb memory and eyewitness testimony. Despite one's high confidence in their accuracy, these are not the actual memories but reconstructions of them, with current experience and knowledge becoming entangled with the original every time one revisits it.

    ReplyDelete
  2. Finally, although not entirely relevant to my previous comments, I do think there are some similarities between the computational processes of software and human cognition.
    This becomes especially relevant when considering short-term (working) memory. For instance, while gauging the environment, there is temporary retention of information in our memory, similar to storage on a computer. Selective attention, which has limited cognitive capacity, filters the incoming information so that only some aspects are retained. Furthermore, since the information has not yet been consolidated (fully integrated into long-term memory), it can be disrupted in the presence of distractions. As a result of such interference from sensory input, the brain is unable to complete the processing and transfer of information to long-term memory. This is where the analogy with software holds: information is stored, but if it has not been "saved" before the work is done (or transferred to long-term memory, due to interference), that information is lost. So short-term memory could be viewed as similar to computer software to the extent that both hold a limited amount of information at a given time and actively process that information. Furthermore, it could be argued that long-term memory resembles software storage in terms of the consolidation and storage of memories and information.
    For instance, as proposed by Marr, there are three levels of description of cognitive processes, suggesting that there is input-output computational processing at the cognitive level.
    However, there are also distinctions, such as a difference in the "hardware": the brain is far less rapid and efficient at computing thousands of incoming inputs.
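    The storage analogy in this comment can be made concrete with a toy sketch. The class, its capacity of seven items (a nod to Miller's "magical number seven"), and the method names are my own illustration, not anything from the reading:

    ```python
    from collections import deque

    class WorkingMemory:
        """Toy model of the analogy above: a limited-capacity buffer whose
        contents are lost unless consolidated ("saved") to long-term store."""

        def __init__(self, capacity=7):
            self.buffer = deque(maxlen=capacity)  # oldest items fall out when full
            self.long_term = set()                # consolidated, durable storage

        def attend(self, item):
            # Selective attention: only some input makes it into the buffer.
            self.buffer.append(item)

        def consolidate(self):
            # Transfer ("save") the buffer's contents to long-term memory.
            self.long_term.update(self.buffer)

        def interfere(self):
            # Distraction before consolidation: unsaved contents are lost.
            self.buffer.clear()

    wm = WorkingMemory(capacity=3)
    for item in ["name", "date", "fact"]:
        wm.attend(item)
    wm.consolidate()           # these three survive in long-term memory
    wm.attend("phone number")
    wm.interfere()             # lost: never consolidated
    print(sorted(wm.long_term))  # ['date', 'fact', 'name']
    ```

    The "unsaved work is lost" behaviour the comment describes is exactly the `interfere()` call clearing the buffer before `consolidate()` runs.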


    ReplyDelete
  3. This comment has been removed by the author.

    ReplyDelete
  4. The operational definition of thinking as "thinking is as thinking does" successfully captures the focus of the Turing test: performance capacity. It also makes sense given Turing's main focus on what it is that mathematicians do, as mathematicians. For both phenomena, introspective observations are of little informative value, and with this, both are subject to the other-minds problem, since whatever it is that mathematicians are doing is in their heads and thus largely inaccessible to anyone other than the mathematician in question. Turing's abstract machine, essentially a model of everything a mathematician does, is equivalent to a simulation of what the mathematician does: doing math in the virtual world. As noted by Harnad, the limit of machine simulation is that it fails to capture all of the properties of its real-world equivalent. Although Turing correctly describes what mathematicians are doing through his formalization of computation as the manipulation of symbols based on operations, this description is subject to the same limits that arise in considering "thinking."
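    Turing's formalization, symbols on a tape manipulated according to a table of operations, can be sketched in a few lines. This toy machine, which flips a string of bits, is my own illustrative example, not one from Turing's paper:

    ```python
    def run_turing_machine(tape, rules, state="start", blank="_"):
        """Minimal Turing machine: read a symbol, look up (state, symbol) in
        the rule table, write a symbol, move the head, change state.
        Halts when the state becomes 'halt'."""
        tape = list(tape)
        head = 0
        while state != "halt":
            symbol = tape[head] if head < len(tape) else blank
            write, move, state = rules[(state, symbol)]
            if head < len(tape):
                tape[head] = write
            else:
                tape.append(write)
            head += 1 if move == "R" else -1
        return "".join(tape)

    # Rule table for a machine that flips every bit, then halts at the blank.
    flip_rules = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run_turing_machine("10110_", flip_rules))  # 01001_
    ```

    Everything the machine "does" is exhausted by that rule table, which is precisely why the formalism says nothing about whatever felt states accompany a mathematician's symbol manipulation.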

    ReplyDelete
    Replies
    1. I agree with this point. Even if we were to engineer a machine that computes a given input in a way similar to an adult human and expresses a response reflecting this "human" computation, we would not have made strides in understanding what separates the conscious from the non-conscious. Even in the example Turing presents, of a machine endowed with a learning experience as close as possible to that of the imperfect human mind and memory system, we would only have successfully simulated our imperfect tools of computation (not to downplay the difficulty of such a feat). As Sofia notes, this would be getting at a definition of thinking as "thinking does"; in other words, it answers the "how" question. A machine of this type could perform the average computations a human performs, but certain characteristics of the human brain cannot be simulated, cannot be expressed with the use of a random generator or the averaging of past life experiences. Simulated thought is not equivalent to that produced by each of us.

      Delete
  5. Turing discusses the Argument from Consciousness as one of the many views contrary to his own. In the Argument from Consciousness, Professor Jefferson claims that not until a machine is capable of composing a concerto or experiencing a particular emotion in response to a particular event (e.g., pleasure at success, warmth from flattery...) can it be concluded that a machine is equivalent to a mind. This is a view I personally hold as well, and struggle with when turning to the mind-machine debate. A machine must feel real emotions, not simply emotions produced by mere programming and symbol manipulation, in order to be considered equivalent to a human.

    Turing counters this argument by stating that the only way to then be sure any human or machine is thinking would be to be that human or machine and actually feel yourself thinking. He states, "A is liable to believe 'A thinks but B does not' whilst B believes 'B thinks but A does not.'" Therefore, we actually cannot be certain that any HUMAN (or machine) aside from ourselves is thinking.

    While this is a strong counter-argument to the Argument from Consciousness, at the end of the section Turing admits, "I do not wish to give the impression that I think there is no mystery about consciousness." Based on this admission, consciousness is a poorly understood and uniquely human trait, and thus even if a machine can pass the Turing test, doesn't a machine still lack this uniquely human trait (and is it therefore NOT equivalent to a mind)?

    ReplyDelete
    Replies
    1. Continuing with what Laura suggests, Turing goes on to conclude that "most of those who support the argument from consciousness could be persuaded to abandon it rather than be forced into the solipsist position". This argument seems to me like a cheap way of bypassing a much larger and more important argument at hand. I find it difficult to completely disentangle "thinking" from "consciousness", as being able to 'think' in the sense that Turing suggests seems to me to require some notion of consciousness, rather than merely producing outputs from programmed algorithms. Because of this, I am inclined to disagree with Turing's suggestion that we need not solve the mysteries of consciousness in order to answer the question of the paper.

      Delete
  6. I was most interested in Turing's description of the "Learning Machine" in the final section of his paper. I think the concept of a "Learning Machine" is the closest any author I have read has come to describing a machine that could learn consciousness. I also think it is the best challenge to Lady Lovelace's objection that machines can only do "whatever we know how to order it to perform". Just as Turing proposed we rephrase the question "Can machines think?" at the beginning of the paper, I propose we rephrase Lady Lovelace's objection as "Can machines learn to create and perform independently of us?". Though we are teaching this machine how to think, Turing's model provides the most potential for the machine eventually to create independently, using what it has learned through the learning process. What I find interesting about creating a child-like machine, and why I think this was not the programmers' approach from the start, is that it requires a measure of humility on the programmer's part not to design a machine in his or her own image, but instead in the image of a child. It seems counterintuitive to leave "empty pages in the notebook", as Turing so aptly describes. Finally, it is difficult to acknowledge that a machine could have the potential to learn like a human learns, and possibly at a much more accelerated rate.

    ReplyDelete
  7. "In this sort of sense a machine undoubtedly can be its own subject matter. It may be used to help in making up its own programmes, or to predict the effect of alterations in its own structure. By observing the results of its own behavior it can modify its own programmes so as to achieve some purpose more effectively." I believe the argument Turing makes here, although clever, does not truly tackle the question at hand: would a machine be able to think about itself in relation to others, and what would the social impact of its interactions with people be? Likewise, would a machine be able to recognize jealousy, envy or pain between two people when it was not blatantly obvious? Would it be able to construct logic for weighing morals and laws, and come to a conclusion about what justice is, in the same way a judge does on a daily basis? These are still pressing questions that don't deal with "feelings" but align more with being perceptive about the world and responding in a dynamic manner.
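    The mechanism Turing describes in the quoted passage, observing the results of one's own behaviour and modifying one's own programme, can be sketched as a tiny self-adjusting loop. The task (guessing a number) and all the parameters are my own invention, chosen only to make the idea concrete:

    ```python
    import random

    def self_improving_guesser(target, steps=1000, seed=0):
        """Toy illustration of a machine that observes the results of its own
        behaviour (how far its current guess is from a goal) and modifies its
        own parameter so as to achieve the purpose more effectively."""
        rng = random.Random(seed)
        guess = 0.0
        for _ in range(steps):
            candidate = guess + rng.uniform(-1, 1)   # try a small change to itself
            if abs(target - candidate) < abs(target - guess):
                guess = candidate                    # keep only changes that helped
        return guess

    print(self_improving_guesser(42.0))  # converges near 42
    ```

    As the comment points out, this kind of self-modification improves performance on a fixed, programmed purpose; it says nothing about the machine choosing its purposes or perceiving the social meaning of its behaviour.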

    ReplyDelete
  8. It would seem that "The Argument from Consciousness" sees consciousness as necessary to the process of thought (or whatever a human brain does): emotions are intertwined with thought; thought requires 'feeling'; thought is inherent to this concept of consciousness; and thought is more than "the chance fall of symbols" or "merely artificially signalling". It follows that the imitation game wouldn't necessarily demonstrate thought (i.e., a problem with the test).
    The counter-argument, from my understanding, is that it is too complicated (right now? at this stage?) to try to demonstrate this concept of 'consciousness' or 'thought' (because you'd have to be the thing and feel yourself 'feel' yourself 'thinking' to get at that, which inherently can't be outwardly proven anyway), BUT a machine can demonstrate more than just "artificially signalling": an almost undeniable demonstration of thought via its responses (the whole "winter's day" interaction). So Turing is, in a sense, saying "let's put this whole consciousness/feeling thing on hold, because we don't need to solve it before we find out whether machines can even get to this Turing-test-passing point first".
    I find that interesting (read: confusing), knowing that Stevan says we probably can't have a T2-passing machine without it also being T3-passing, and knowing Stevan believes that our T3-passing MIT robot, Isaure, shouldn't be dismissed as a zombie (without 'feeling'), because it is hard to believe she could be T3-passing and a zombie (i.e., feeling may be inherent to this?).
    So the counter-argument is basically saying the initial objection is right to think 'feeling' is important, but wrong in that it jumps the gun by stating a machine couldn't 'feel' and therefore couldn't think: we can break these into steps (by what we can currently prove) even though they're so closely connected.

    (Sorry for the somewhat confusing rant format this inadvertently took)

    ReplyDelete
  9. “Since Babbage's machine was not electrical, and since all digital computers are in a sense equivalent, we see that this use of electricity cannot be of theoretical importance. “ (page 6)

    I can’t understand why Turing claims that all digital computers are equivalent, and that electricity is of no theoretical importance. At this point in the text he has just finished describing how Babbage’s machine was so much simpler than the computers used today, and a huge part of that difference comes from the store, executive unit, and control. From what I can understand, Babbage’s machine is a primitive form of a computer, which is why it is labelled a computer. But surely it is not equivalent to a modern computer? I don’t see how this point can be used to suggest that electricity is of no theoretical importance. I can see that he’s trying to relate the electricity in the modern computer to the electricity in the nervous system, but as he says, this is only a superficial similarity. Am I missing something?
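    One way to read Turing's claim: the equivalence is about what the machines can compute, not how fast or how conveniently. A machine's behaviour is fully fixed by its rules (store, executive unit, control), so any substrate that follows the same rules, whether gears, relays, electricity, or an interpreter, computes the same results. A toy sketch of this (the three-instruction machine here is my own invention, purely illustrative):

    ```python
    def run_program(program, memory):
        """Interpret a tiny 'store / executive unit / control' machine:
        a few arithmetic instructions over numbered storage cells. Running
        it in Python is itself one machine simulating another, which is the
        sense in which digital computers are theoretically equivalent."""
        pc = 0  # control: index of the next instruction to execute
        while pc < len(program):
            op, a, b, dest = program[pc]
            if op == "add":
                memory[dest] = memory[a] + memory[b]
            elif op == "mul":
                memory[dest] = memory[a] * memory[b]
            pc += 1
        return memory

    # Compute (3 + 4) * 10 in the simulated store.
    store = {0: 3, 1: 4, 2: 10, 3: 0}
    result = run_program([("add", 0, 1, 3), ("mul", 3, 2, 3)], store)
    print(result[3])  # 70
    ```

    On this reading Babbage's machine and a modern computer differ enormously in speed and storage, but not in the kind of thing they can compute given enough of both, which is why the electricity itself carries no theoretical weight.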

    ReplyDelete
  10. “Following this suggestion we only permit digital computers to take part in our game” (page 3)

    Turing talks about how the game is limited to non-living machines. Can we extrapolate this to mean that he is limiting the Turing test to a maximum of T3 (i.e., no T4 or T5)?

    ReplyDelete
  11. If Turing is concerned with the question “Can machines think?”, why does he instead base everything around the parameters of the imitation game? After all, a machine passing as human may not be thinking in the same way humans are ‘thinking’. It may be doing something else entirely, namely what it is programmed to do. Can that act be defined as equivalent to the act of thinking humans do?

    In my opinion, Turing does not refute the consciousness objections satisfactorily enough that, even if we were to come up with a machine that would most likely deceive humans in the imitation game, we could be comfortable granting it the status of a thinking machine. Machines have a linearity that humans, who constantly grow and evolve even beyond one’s own lifetime, simply don’t. Machines do what they are told and programmed to do, without understanding.

    ReplyDelete
  12. I do not believe that machines can think, and I am not really persuaded by Turing’s arguments. I think that even if the speed and storage of the machine were optimal, there would be no way to create something indistinguishable from a human. I am not talking only about the skin (the external part), but also about the internal part of the machine and its self-awareness. As mentioned in the article, a machine will never be able to feel or experience internal states that are influenced by the internal or external world. Although it can imitate us having different states, not truly feeling them wears away the human part of the experience.
    I also think that a machine will always make the most intelligent decision, but could we implement humanity and compassion in it? The machine would probably be able to answer with kindness, but not with humanity. I also do not believe that it would be able to act intrinsically, because everything would be mechanical and programmed. The motivation of the machine cannot be the same as a human’s, because the life experience will never be the same.

    ReplyDelete
  13. “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child brain is something like a notebook as one buys it from the stationer's. Rather little mechanism, and lots of blank sheets. (Mechanism and writing are from our point of view almost synonymous.) Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed. The amount of work in the education we can assume, as a first approximation, to be much the same as for the human child.”

    In regard to this, I think it would be wrong to take this approach when talking about the Turing test. It ignores the nativist account of a child’s brain and learning processes. To think that a child is easy to mold because it is a blank slate would be a tabula rasa approach; it ignores the neural networks already established in a child’s brain. For a machine to have the capacity to learn, it would need intention, internal states and a feedback mechanism. I think the situation is more complex than saying that a child’s brain is blank and programmable. Would it also be wise to create this in artificial intelligence, given what we know about how psychopathy works? If we create a machine with no internal states and no feedback mechanism, wouldn’t we create a machine that could be psychopathic, since it doesn’t feel?

    ReplyDelete
