Saturday, January 6, 2018

1a. What is Computation?


Optional Reading:
Pylyshyn, Z. (1989). Computation in cognitive science. In M. I. Posner (Ed.), Foundations of Cognitive Science. MIT Press.
Overview: Nobody doubts that computers have had a profound influence on the study of human cognition. The very existence of a discipline called Cognitive Science is a tribute to this influence. One of the principal characteristics that distinguishes Cognitive Science from more traditional studies of cognition within Psychology is the extent to which it has been influenced by both the ideas and the techniques of computing. It may come as a surprise to the outsider, then, to discover that there is no unanimity within the discipline on either (a) the nature (and in some cases the desirability) of the influence or (b) what computing is -- or at least what its essential character is, as this pertains to Cognitive Science. In this essay I will attempt to comment on both these questions.


Alternative sources for points on which you find Pylyshyn heavy going. (Remember that you do not need to master the technical details for this seminar, you just have to master the ideas, which are clear and simple.)

Milkowski, M. (2013). Computational Theory of Mind. Internet Encyclopedia of Philosophy.


Pylyshyn, Z. W. (1980). Computation and cognition: Issues in the foundations of cognitive science. Behavioral and Brain Sciences, 3(1), 111-132.



Pylyshyn, Z. W. (1984). Computation and cognition: Toward a foundation for cognitive science. Cambridge, MA: MIT Press.

67 comments:

  1. “In Western culture, we tend to take our capacity for thought as the central distinction between ourselves and other animals.” (What is Computation)

    If cognition is the interaction between multiple cortical areas, and such cognition gives rise to thought, then, given that mammals are the animals with a cortex, mammals should have the capacity to think, to have thoughts. Different species of mammals differ in their amount of cortex, and the more cortex a mammal has in proportion to the size of its brain, the more cognitively advanced it is. If the amount of cortex an animal has dictates how abstract its thoughts can be, the thoughts other mammals have may not be as complex, abstract, and advanced as humans', since humans have the largest and most developed cortex of all mammals (in proportion to body size). However, other mammals should nonetheless have the ability to have thoughts, and the mammals closest to humans in terms of cortex, such as monkeys, should be capable of some form of "advanced" thought. Furthermore, cognition is what allows our brain to do our "doings", and inner thought, as I understand it, is a doing. Thus, other mammals should be able to have thoughts. So is the idea that thought is what separates humans from other animals correct? I believe it should have been written: humans' ability to have complex and abstract thoughts is the central distinction between humans and other animals.

    ReplyDelete
    Replies
    1. Cognition is what is going on in your brain that generates your capacity to do what you can do. "Thinking" is a good enough word for much of it. Human and nonhuman animals can do lots of things, so it is likely that they think.

      It feels like something to think, though, so we can't be sure (with either human or nonhuman animals). As to amount of cortex and abstraction, that's all rather vague. What's "abstraction"? And what is the evidence that amount of cortex is related to how much of it we can do?

      Computation is one of the candidates for what thinking might turn out to be. If it is right -- or to the extent it is right -- it applies to both humans and animals: But what is computation?

      Delete
    2. Attention everybody: This week we are addressing the question "What is Computation?" (which is the same as the question "What is a Turing Machine?").

      We are not yet on the question of "What is the Turing Test?" That's next week!

      Delete
  2. “A physical symbol system has the necessary and sufficient means for general intelligent action.” (Newell and Simon, 1976) [What is a physical symbol system].
    I find it rather interesting that Newell and Simon could put forth so broad a hypothesis. This implies, as the article suggests, that any intelligent system must be capable of symbol manipulation. This in itself is not so outrageous a claim, because one need only look at human thought processes to see symbol manipulation (of one form or another) at work. Most operations that the brain carries out can be reduced to a Turing Machine, which is a perfect symbol manipulator. The bolder claim that this hypothesis implies is that a system capable of no more than symbol manipulation would be considered intelligent. My first question becomes: what exactly do Newell and Simon define as 'intelligent'? Today, a laptop is capable of symbol manipulation at a far faster rate than the average human mind, but we do not consider a laptop 'intelligent' in the normal sense of the word. This claim also leads me to wonder if Simon and Newell would consider a Chalmerian zombie intelligent. I would suggest that their hypothesis could do without the word 'sufficient'. Symbol manipulation is certainly necessary for intelligence but I am not convinced that it is sufficient…

    ReplyDelete
    Replies
    1. (1) What do you mean by intelligent?
      (2) How would you know whether something was a zombie? Is Isaure a zombie? How do you know?

      Delete
  3. Computation according to Horswill seems to be fundamentally a question-answer process. He states that there is a concept of behavioural equivalence, which means that as long as the question is answered correctly, all methods that lead to that answer are equivalent. However, this leads me to the question of comprehension: should understanding the way that the computation provides information (reduces uncertainty) increase its inherent utility and value, or does it not matter? An example would be me inputting arithmetic functions into an alien technology that seems to be spitting back the correct answers. Is this form of computation as valuable to me as the same result from a regular calculator (from which I can understand how answers are generated)? Does having an understanding of the mechanisms behind the computation make it inherently more valuable, therefore going against behavioural equivalence?
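Horswill's behavioural equivalence can be sketched in a few lines of code (a hypothetical illustration; the summing function is my own choice, not from the reading): two procedures with entirely different internal methods count as the same computation if they give the same answer to every question.

```python
def sum_by_loop(n):
    # Method 1: iterate and accumulate.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_by_formula(n):
    # Method 2: Gauss's closed-form formula -- no loop at all.
    return n * (n + 1) // 2

# Behaviourally equivalent: the same answer to every question,
# despite entirely different internal procedures.
assert all(sum_by_loop(n) == sum_by_formula(n) for n in range(100))
```

On Horswill's view, a questioner who sees only the answers cannot tell these two apart; the comment above asks whether that should really be all that matters.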

    ReplyDelete
    Replies
    1. Good question. You are asking about "underdetermination" of theories by data. More than one theory may explain the data. (And more than one formula may solve a maths problem.)

      Early in the day, when a theory explains only a bit of data, the right way to try to make sure the theory is the right one is to scale up, and see if it can explain even more of the data.

      But what about when it explains all possible data? Could there be more than one theory that can do that? for astronomy? physics? Is the simplest theory that explains it all the right one?

      Maybe there's no way to know, because physics is nowhere near that "Grand Unified Theory of Everything," so there's no way to know whether someone will think of another one.

      But what about Isaure? She can do everything we can do:

      Could she be doing it all the "wrong" way?

      Delete
  4. I find the cochlear implant example in Horswill's reading very interesting. I think it is amazing that these computational models helped us understand the neural system underlying this function and that this is now used to treat hearing problems. It seems like for now we are able to understand brain systems that involve direct input from our senses, like audition and vision. How about all higher functions of the brain? For instance, how is computational neural modelling helping us understand how we can have private thoughts. This is the real challenge. Is being able to recreate a human process on a computer really telling us anything about how it’s happening in our brains? Couldn’t there be multiple ways to achieve a same result?

    ReplyDelete
    Replies
    1. Neural network models are able to learn, and hence are one possible explanation of how the brain learns. But there may be other ways. And the learning so far is not yet human-scale (in things humans can learn). What if the model can do everything we can do (as Isaure can)?

      Delete
    2. Prof Harnad, you asked what if the model can do everything we can do (as Isaure can)? But how could we ever know that Isaure is doing everything we humans do? A robot could simply have been programmed to answer "yes" to the question "do you have private thoughts". Even if the result is the same, the process isn't and I don't see how we could get around that problem, if the goal is to understand both how and why the brain feels and does.

      Delete
    3. Anna, When we discuss what computation is on Tuesday, you will find that it can "do" just about anything (symbolically), including feedback loops.
      Emmanuelle, But Isaure doesn't just answer yes/no questions. She can do anything you and I can do. Talk to her and you'll see!

      Delete
    4. Emmanuelle,
      from my understanding, the only way to know that Isaure would be able to do anything humans can (let alone everything) is if we actually *understood* everything humans can do - if we had the necessary and relevant explanations. That would in principle be the only way of making Isaure in the first place! That, in my mind, is the biggest (and only relevant) obstacle to making non-biological people (i.e. general artificial intelligence). Isaure and all her capabilities wouldn't just suddenly appear unless the programmer/design people at MIT knew what they themselves were doing - unless they had an explanation of what it is that people do and how they do it. (Well, technically, I imagine Isaure *could* eventually just appear on the scene, able to do everything humans can - because that once happened with biological people -> evolution. But if we could simulate an evolutionary process to eventually make humans, we would still need to know about the process explicitly to be able to program it.) The point of all this is to say: if Isaure can do everything we can do, I think she effectively would *be human*.

      Delete
    5. Hugo, I think you're not quite appreciating how much the TT really demands: the capacity to do anything Isaure or any of you can do (or say). And the purpose of the TT is to test whether a candidate mechanism would provide that ability. The TT is passed if people can't tell the candidate (Isaure) apart from any other person (for a lifetime, if necessary). You don't need to know how it's done in advance, otherwise there would be no need for the TT! The TT is testing the cognitive scientist's model.

      (We are nowhere near building an Isaure, though, even though it's an "easy" problem...)

      But why are we talking about the TT? This week is about what computation (and the Turing Machine -- not the Turing Test) is!

      Delete
  5. Horswill explains a computational problem as “a set of possible questions, each of which has a desired correct answer”. Does this mean that we are not computing when asked open-ended questions? If not computation, what kind of problem-solving are we employing? When we type an open-ended question into Google and we get thousands of possibly related links as an answer, did the computer not just compute multiple answers for one input? If this is true I do not think that this definition of a computational problem holds.

    ReplyDelete
    Replies
    1. I think Horswill is referring to the binary nature of computation. All complex symbols can be reduced to simple binary (0/1) ones. But the Turing Test is not just questions and answers. You can say anything about anything to Isaure. And besides, she can do much more than just talk: indeed, we'll find out that words are grounded in nonverbal experience (especially learned categories).
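The reduction of complex symbols to binary ones can be sketched as follows (the three-word alphabet is an illustrative assumption): give each symbol of a finite alphabet a fixed-length bit code, and a machine that manipulates only 0s and 1s can carry any symbol string without loss.

```python
# Assign each symbol a fixed-length binary code.
symbols = ["cat", "mat", "on"]                     # illustrative alphabet
code = {s: format(i, "02b") for i, s in enumerate(symbols)}
decode = {bits: s for s, bits in code.items()}

# Encode a symbol string as a bit string, then decode it back.
sentence = ["cat", "on", "mat"]
bits = "".join(code[s] for s in sentence)
recovered = [decode[bits[i:i + 2]] for i in range(0, len(bits), 2)]
assert recovered == sentence                       # nothing was lost
```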

      Delete
  6. “By modeling neural systems, we can better understand their function.” I agree that a great deal of knowledge can be discovered about the nervous system by dissecting it into elements and then analyzing small computational circuits, where “one neuron stimulates another.” This knowledge can then be used to map out networks and such. This approach, however, only explains “the doing” function of the nervous system. Ultimately, studying the nervous system as a computational device may steer us from understanding other functions of the nervous system, specifically as they relate to the hard problem of consciousness – how and why the brain feels. That is, the nervous system is dynamic, which means that its outputs are not systematic, nor are they fully “predictable”. Input, as it travels through the nervous system, will never be represented in the same way as communication between neurons is merely a probability of events. For example, the amount of neurotransmitter released or reaching the post-synaptic terminal during neuronal communication is never the same. Thus, the nervous system will not produce “behaviourally equivalent” outputs given the same input and therefore should not be approached as a fully computational device.

    ReplyDelete
    Replies
    1. Anna, you make a good point about the difficulty of computationally modeling the nervous system due to its dynamic nature. However, Horswill's statement "When one neuron stimulates another neuron, it predictably increases or decreases the rate or likelihood of the second neuron stimulating other neurons" I believe points to the fact that neurons of a certain type (e.g. cholinergic neurons) will have specific effects on the post-synaptic terminal based on the receptors that the neurotransmitter (acetylcholine) binds to. If the acetylcholine binds to some receptors (excitatory), the rate/probability of the next neuron firing will increase, while the reverse will happen if the receptors are inhibitory. Therefore, in a system where the specific combinations of neurons and receptors are well understood (such as the cerebellum), and the threshold excitatory potential for firing is tracked, a computational model could conceivably simulate the behavior of a neuronal population. I could especially see this occurring due to advances in machine learning, where a learning data set could possibly be used to train the model on the average quanta of neurotransmitter released at different synapses given certain excitatory inputs. Of course, as Horswill notes, we are very far from developing this sort of model at the whole brain level.
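The kind of model Marcus describes can be caricatured as a leaky integrate-and-fire unit (all constants below are illustrative assumptions, not physiological values): excitatory input pushes a membrane potential toward a firing threshold, inhibitory input pushes it away.

```python
def simulate(inputs, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron: returns spike times."""
    v = 0.0                  # membrane potential
    spikes = []
    for t, i in enumerate(inputs):
        v = v * leak + i     # leak, then add input (+ excitatory, - inhibitory)
        if v >= threshold:
            spikes.append(t)
            v = 0.0          # reset after firing
    return spikes

# Steady excitation makes the unit fire; pairing each excitatory
# input with an inhibitory one silences it.
assert simulate([0.4] * 10) != []
assert simulate([0.4, -0.4] * 5) == []
```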

      Delete
    2. Anna, computation can simulate, formally, just about anything, including the rules (algorithms) that get from input to output. It can't "do" physical and chemical dynamics -- physically and chemically -- but it will need a little more argument to show why it would need to, as long as it can generate the right output to the input (as Isaure does).

      Marcus, here's a hint: look for what computation alone cannot do between the world and the input, because you are right that it is capable of simulating just about anything going on inside between input and output.

      Delete
  7. Newell and Simon’s physical symbol system hypothesis states that something can be considered intelligent if it has a physical symbol system in place. The article also mentions that the symbols which are part of this system are actual real world objects. Although associative learning is an important element of overall cognition I don’t think that we can say it is the only necessary element for a machine to be truly considered intelligent because intellect encompasses much more than just what we can build from physical associations. Can we really consider something to be truly intelligent if it only deals with the physical world? The implication from this hypothesis, also mentioned in the article is that anything that we would consider intelligent is a physical symbol system. Is it true to say that the intellect of human beings can be reduced to this kind of symbol manipulation? I don’t think that we can consider human beings to be functioning solely using this kind of system given all the things that we can eventually come to know, that is any knowledge outside of physical sensations (sight, hearing, touch etc…). We must be combining a physical symbol system with some other mechanism because if not our cognitive capacity would be necessarily machine-like.

    ReplyDelete
    Replies
    1. Maria, do you agree that whatever is going on inside Isaure's head to make her able to do everything we can do provides a (potential) causal explanation of how we do it? And if computation alone can do that, then computation explains it. To show computation couldn't do it, you need more than the intuition that there is more to cognition than just computation! (In the end, we will come to the conclusion that computationalism -- "cognition = computation" -- is wrong. But right now we are still at the stage of appreciating how powerful computation really is. Then we'll need (powerful) reasons for why it's not powerful enough.)

      To start, we need a clear, kid-sib explanation of exactly what computation is.

      Delete
    2. Sir, I may be off on this, but I am trying to connect this idea of the power of computation with the idea of simulation and how it is being used to eventually show that computationalism is wrong. Would it be right to say that another way of thinking about 'computation can simulate almost anything', aka the Church-Turing thesis, is to look at Isaure as an embodiment of a simulation of the human experience? To clarify: in presenting the Church-Turing hypothesis, are you effectively posing the idea that if a robot like Isaure, who does everything that humans do, could truly exist, then that would be the equivalent of proving the Church-Turing hypothesis right, and therefore that cognition is entirely computation?

      Delete
  8. Setting aside questions concerning consciousness, the easy problem is centered on how and why the brain generates the doing of all the things we can do, apart from feeling. It appears that this cannot be answered by computation alone, since the incomputability of the halting problem suggests that the brain is not like a computer, given our capacity to solve certain problems that have yet to be successfully solved by Turing machines. Nonetheless, "What is Computation" additionally mentions the fact that, like a computer, humans have the capacity to solve such problems but remain subject to error. I find this to be extremely relevant in considering the psychological and social issues that arise with respect to computational systems and particularly those associated with information technology. Though it is clear that information technology presents new types of risks that are of increasing importance with the transfer of human intellectual labour to machines, these are not any more worrisome than the hazards that characterize processes that have yet to be automated. As cognitive scientists focus their methods on passing the full robotic version of the Turing Test, possibilities for eliminating these risks in automated programs may arise. Subsequently, it is easy, though troublesome, to imagine how this could lead to the prospect of altering the brain's computational processes. This is significant in light of the discourses that have arisen in the Western world with respect to computation and simulation, which have centered on how much of our brain and thoughts can be simulated.

    ReplyDelete
    Replies
    1. Sofia, How does the non-computability of the solution to the halting problem show that computation cannot be what's going on in Isaure that makes her capable of doing everything she can do? After all, she can't compute the solution to the halting problem either. What's an example of a computational problem that a human can solve but computation cannot?

      Kid-sib couldn't follow your point about risk: We're just asking whether computation could be what's doing the job inside Isaure's head.

      Delete
  9. In "What is Computation", Ian Horswill states "any system whose behavior is predictable can be simulated, or at least approximated, in software". Take note of the word “predictable”. I believe this is where the problem of a computer simulating a human fails because humans are not entirely predictable. At the root of it, we are predictable in a way that we will do anything to survive. For example, we drink when we are thirsty, eat when hungry, and sleep when tired. However, more complex behavior such as deciding what to eat or where to sleep can be quite unpredictable. Hence, asking a computer what to eat today would be a very difficult task for it to do as it would have to take into account incredible amounts of data about our likes and dislikes, to eventually end at one decision. Nevertheless, big companies like Facebook and Google are constantly receiving enormous amounts of data each day, which allows them to create quite predictable programs.
    I find this on the one hand a little scary, seeing as these companies already have enough of my data to know me better than I know myself, but also quite fascinating, as it has allowed these types of technologies to advance alarmingly fast. I am looking forward to seeing what type of advancements will have reached the popular market in the next 5 years. Maybe strong AI isn't so far out there after all?

    ReplyDelete
    Replies
    1. Isaure, it's fun speculating with you about what MIT cognitive engineers may or may not have put in your head that allows you to do everything you can do that we can do, including discussing whether it could be just computation.

      Predicting everything that a computation -- say an "algorithm" or computer program -- will do is possible only if the program is short and simple. Otherwise the program (if the hardware functions well and the program can really do everything it's supposed to be able to do) may be determinate, in the sense that it really does what it's supposed to do, but that doesn't mean we can predict what it's going to do: A theorem-proving program may be able to prove whether a theorem T is true or false, but perhaps even the programmer, the one who created the theorem-proving algorithm, cannot predict in advance whether T will turn out to be true or false.

      Besides, if all you want is that the system should make errors now and again, there's no problem adding that to the program too.

      So if computationalism (cognition = computation) is wrong, it's not because of predictability.
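One concrete way to see that "determinate" does not imply "predictable in advance" (the Collatz procedure is my example, not the thread's): the function below is fully determinate, yet whether it even halts for every input is a famous open problem, and in general the only way to know how many steps it takes is to run it.

```python
def collatz_steps(n):
    """Number of Collatz steps to reach 1 from n (assuming it does)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Fully determinate, yet the step counts are wildly irregular:
# 6 takes 8 steps, while the nearby 27 takes 111.
assert collatz_steps(6) == 8
assert collatz_steps(27) == 111
```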

      Delete
  10. When people say things like "it's been argued that the real universe is effectively a computer, that quantum physics is really about information, and that matter and energy are really just abstractions built from information", or in discussions of computational neuroscience, I wonder whether this is simply a way we represent abstract ideas because our current way of thinking is so computer-based, or whether these abstract phenomena actually work in this manner. A simpler example would be our understanding of cells and their subcomponents: mitochondria - the powerhouse of the cell; endoplasmic reticulum - the transport system. This is similar to factory-line thinking, which is a real-world, tangible way of visualizing these processes. Maybe this is due to the constraint language puts on our understanding, but it seems that our understanding of new things is fueled by the current popular way of thinking. Right now, the popular way of thinking is very computer-based, so therefore we begin to see things functioning in this way. We then continue to redefine words to fit this new way of thinking, like "procedure" and "information processing" in this article, and then we use these words to explain things like our thinking abilities. But this can completely change depending on what is current and hip in our thinking.

    ReplyDelete
    Replies
    1. Daria, probably we are partly biased by the fact that there's so much computation going on around us these days. But the view that computation can do just about anything (symbolically) is not based on that.

      What is computation?

      Delete
  11. In "What is Computation" it is stated that "Intelligence is ultimately a behavioral phenomenon. That is, an entity is intelligent because it behaves intelligently, not because it's composed of some special substance such as living tissue or a soul."
    Although there is a behavioural component of intelligence, I don't believe that this type of definition encompasses all areas of intelligence. Computers fooling humans into thinking they are human is impressive but doesn't necessarily make them intelligent. Children with ASD can show low behavioural intelligence and still show strides in other areas. This doesn't make them less intelligent just because they express themselves differently or show a lack of behavioural intelligence. I might get nervous and answer "yes" to someone asking, "when's your birthday?", but (I hope) this doesn't necessarily mean that I am unintelligent just because I had a behavioural mishap. The reverse could be said for someone who is extremely good at gauging social cues and interacting with others in social settings but has no arithmetic skills. If we create a robot that can convincingly simulate human behaviour, does this make it automatically intelligent?

    ReplyDelete
    Replies
    1. Kayla, remember the "hard" and "easy" problem. The easy problem is explaining how and why we can do what we can do. Doing is just behavior. The question is whether what's going on in Isaure's head is just computation. ("Intelligence" is just another word for cognitive capacity = capacity to do things).

      The hard problem is explaining how and why we feel. We're not going to solve that in this course, though we will use it.

      Can computation be what is going on in Isaure's head that generates her capacity to do everything she can do?

      Before saying whether it can or cannot, tell kid-sib what computation is, and then why it can or can't do what.

      Delete
  12. So if AI machines pass the Turing test, why do we as humans find them so unnerving? Wouldn't this imply that they in fact have not passed the T-test, and thus are not "intelligent"? Horswill makes the apt point that if we can computationally simulate individual neurons, we can eventually simulate an entire brain network. But would this simulated brain be equivalent to a human brain, or would it in fact constitute an entire "other" species? In lecture, we discussed the Isaure-as-an-intelligent-robot problem, that is, whether we would consider her "human-like" if she had passed the T-test. I think this kind of thought exercise puts people on edge because they are assuming that by reaching human levels of "intelligence," the entity automatically needs to be a human. However, there's an important distinction between simulations and objects, namely that as close as one gets to accurately simulating something (say, a human brain), it will never actually be a human brain, despite its human-like level of intelligence.

    Additionally, I've always been curious as to why people (humans) are so inherently mistrustful and wary of AI machines (the old argument of "robots are going to take our jobs," for example). Do you think that there is an explanation for this phenomenon, other than a possible biological response of feeling threatened as a species?

    ReplyDelete
    Replies
    1. Juliana, "mistrust" is not what's at issue here. Use your imagination. If Isaure really were an MIT robot, but otherwise behaved exactly as she does, for a lifetime, how long do you think this "mistrust" would last? (And where would it be if we had no idea she was a robot? -- which is the way the Turing Test is supposed to go...)

      Delete
  13. If computers are Turing complete and are able to simulate any known computer (and therefore compute anything we know how to compute), this raises the question of what other things, including humans, are able to do this. Being universal means that simulating the Universal Turing Machine is essentially simulating all computers. I wonder if this means that humans also possess the property of universality.

    ReplyDelete
    Replies
    1. Rachael, it is not computers (the hardware) that are universal, it is computation (i.e., what can be done by the software). When humans do computations, they are being Turing Machines. (That's what you will find Searle is doing, in 2 weeks).
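"Being a Turing Machine" can be made concrete with a minimal simulator (the bit-flipping rule table is an illustrative assumption): a finite table of (state, symbol) -> (new state, symbol to write, move) rules, applied step by step to a tape.

```python
def run(rules, tape, state="start", blank="_", max_steps=10_000):
    """Run a one-tape Turing machine until it halts (or max_steps)."""
    cells = dict(enumerate(tape))       # tape as a sparse dict
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A machine that flips every bit, halting at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
assert run(flip, "0110") == "1001_"
```

A human executing such a table with pencil and paper is, for that stretch, being a Turing Machine in exactly the sense of the reply above.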

      Delete
  14. “...If a computer could fool humans into thinking it was human, then it would have to be considered to be intelligent even though it wasn’t actually human. In other words, intelligence is ultimately a behavioral phenomenon. That is, an entity is intelligent because it behaves intelligently, not because it’s composed of some special substance such as living tissue or a soul.”

    I found the idea of a certain behavior representing intelligence really interesting, but I was wondering, in terms of the Turing Test, whether all behaviours that lead to a human being deceived by a computer can be considered intelligent. For instance, I read that there is a chatbot called Eugene Goostman, which impersonates a 13-year-old Ukrainian boy, that passed the Turing Test. Although the chatbot made grammatical mistakes and didn't always understand the context of the conversation, humans still believed that it was a young boy who made grammatical mistakes because of his imperfect command of the language, and who didn't understand certain contexts due to his age. Can we consider the behaviour of the chatbot to really be intelligent, as it seems to pass with humans due to its associated persona and not its functional skills?

    ReplyDelete
    Replies
    1. Maria, Isaure is not a chatbot, and no chatbot has yet passed the (verbal-only) Turing Test.

      But why are we talking about the Turing Test? That's next week. This week is "What is Computation?"

      Delete
  15. Horswill draws attention to the idea of behavioral equivalence in this paper: regardless of the procedure used to generate the output, all we care about is whether two systems compute the same function. With this in mind, last week's discussion on Zombies becomes a conversation about whether or not feeling is a required part of the procedure. If we assume feeling to be part of the procedure, then it seems we cannot also have feeling be an output of the procedure, since if that were the case you would need to already be aware of the output for the procedure to work in the first place. Is this along the lines of what was said in class about there being nothing left to explain the hard problem if we answer the easy problem?

    ReplyDelete
    Replies
    1. Once we had a causal mechanism that could do and say anything and everything we can do and say, there is nothing left to explain that we can feel. (Even if there were a location or activity pattern of the brain that was correlated only with the capacity to feel, and completely uncorrelated with the capacity to do, it would not explain how or why we feel. -- In fact, if you are not a dualist, you have to assume that some brain structure or activity causes feeling, but the hard problem is explaining how and why, because otherwise feeling would be causally superfluous.)

      But this week is on computation, not consciousness...

      Delete
  16. Asking some questions: Must computation be accurate? What things can do computation? Can computation exist without a thinker? What is the context for the mathematically precise limits on computation?

    Computation seems to be a way of looking at the world such that, for whatever definition of computation the thinker is using, the world meets that definition (the Church-Turing thesis definition, for example). I could use a desktop computer in a way that does not do computation if I so choose (as a lamp, for example); in contrast, I could ascribe meaning to certain dynamical processes so that they match my definition of computation. The sea level could compute global climate warming so long as I understood the relationship between temperature and sea level, however, this computation could be wrong if I didn't realise that someone had been secretly draining the ocean to deny climate change. Here we stray from the Turing definition to ask the question: must computation be accurate? Could I compute the flipping of a coin based on sea water levels? I would expect to be wrong half of the time, but how important is that? Even when I am correct, that would be because of chance, not because a relationship existed between coin flipping and water levels. So are there certain things that cannot act to compute certain information? If you can do arbitrary computations that have no more meaning than the last example, it seems anything can be computed. This is similar to saying any finite proposition can be proven by a formal system. Yes of course it can: take it as your first axiom. Conversely, one formal system can prove everything, with two contradictory axioms and the rule that <>. These powerful systems can become trivially vacuous if we are not careful. Likewise, take your halting machine and change your parameters. When the machine does not terminate, this means that the halting problem has a solution. Or this means it doesn't have a solution. Or it means 2+2=4 and the sky is blue. Or it just computed that the machine doesn't halt. 
(this last one seems to be some sort of identity computation, the thing doing the computation just computes that that thing is a thing doing the thing it does. Always vacuously true, according to interpretation). With thinking, we can make any object compute anything we want, trivially. The limits seen in the readings only come into play when you fix yourself to an interpretation. Anything can be computed, but one computational interpretation cannot compute anything. This is because there are questions that go beyond the finite linear scope of mechanical procedures and our ability to interpret them. Well formulated mathematically precise questions that we can conceive of that are beyond the scope of a meaningful computation. This makes the question of computation very much a question of the types of questions we are wondering about, still very much linked to the thinker.

    If you skip all of that, at least consider that walking across the room is not computation because we don’t conceive of it as such, not because mechanically walking is so different from bending fingers to count to ten. Given an interesting enough interpretation, walking could compute something (number of steps it takes to cross the room).
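The point about interpretation can be made concrete with a toy sketch (all names here are my own, purely illustrative): the same physical trace only "computes" something once an interpretation is supplied by a thinker.

```python
# A recorded physical process: positions of a walker crossing a room (metres).
trace = [0.0, 0.7, 1.4, 2.1, 2.8, 3.5]

def interpret_as_step_count(positions):
    """Under one interpretation, the walk 'computes' the number of steps taken."""
    return len(positions) - 1

def interpret_as_distance(positions):
    """Under another interpretation, the very same trace 'computes' the distance crossed."""
    return positions[-1] - positions[0]

# Nothing about the walk itself picks one of these out; the interpreter does.
steps = interpret_as_step_count(trace)      # 5
distance = interpret_as_distance(trace)     # 3.5
```

The trace is fixed; which function of it counts as "the computation" is entirely a matter of the interpretation chosen.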

    ReplyDelete
  17. During Pylyshyn's explanation of why cognition should be viewed as computation, he states that "they both exhibit the same sort of dual character with respect to providing explanations of how they work - and for the same reason”. Is this a reference to the easy/hard problems?

    For both computation and cognition we can describe the processes of HOW they work (the syntax if you will), but when we try to explain WHY (the meaning/semantics) we hit a dead end.

    For example, Pylyshyn goes on to say that “by separating the semantic and syntactic aspects of cognition, we reduce the problem of accounting for meaningful action to the problem of specifying a mechanism that operates upon meaningless symbol tokens and in doing so carries out the meaningful process being modelled (e.g. arithmetic)... In other words, the machine's functioning is completely independent of how its states are interpreted”.

    To me this is Pylyshyn explaining that to understand cognition and computation we must split them into the easy and hard problems and focus on the easy problem (because it's easier that way), but I’m worried that I'm mistakenly equating meaning and feeling.

    ReplyDelete

  18. In “What is Computation,” Horswill presents the concept of computation not as something tied to numbers, acronyms, punctuation, or syntax, but as a question-and-answer system. By this, he stresses the concepts of input values (information specific to the question) and output values (the desired answer), which form the functional model of computation. This model comes with its own limitations, however, which suggests the imperative model may be a better model with which to define computation. In the imperative model, the procedures in question are sequences of commands that cover any manipulation of representations regardless of inputs and outputs (Horswill, 2008). It is this description of the process of computation that interested me the most, because it seems it could be a step in the right direction as far as understanding how individuals do things. By steering away from the straightforward input-output model, we can gain a better understanding of the sub-steps involved in a process, which may give us insight into how things occur. That being said, in Reading 1b, we discuss the idea that computation itself cannot explain consciousness.
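A minimal sketch of the contrast (illustrative code of my own, not from Horswill): the functional model treats computation as a pure input-to-output mapping, while the imperative model reaches the same answer as a sequence of commands whose intermediate sub-steps are explicit.

```python
# Functional model: computation as a question-and-answer mapping from
# input values to an output value, with no intermediate state exposed.
def mean_functional(xs):
    return sum(xs) / len(xs)

# Imperative model: the same answer reached by a sequence of commands
# that manipulate a representation (a running total) step by step.
def mean_imperative(xs):
    total = 0.0
    count = 0
    for x in xs:        # each iteration is a sub-step we could inspect
        total += x
        count += 1
    return total / count

# Behaviourally equivalent, but only the imperative version exposes sub-steps.
```

Both return the same output for the same input; the difference is only in how visible the procedure's internal steps are.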

    ReplyDelete
    Replies
    1. Although the imperative model of computation is less restrictive because it covers any kind of manipulation of representations, not just the relation between inputs and outputs, Horswill makes the point that "it can be difficult to specify what a program is supposed to do, much less determine whether it does it."

      It sounds like the imperative model is much harder to reason about. If it’s already hard to figure out what occurs, then it should be even more difficult to understand how things occur. Thinking about it in this way, isn’t the imperative model just as restrictive as the functional model, albeit in a very different way? I don’t think Horswill is implying that one model is better than the other; rather, he is suggesting there are two different ways to tackle computation, both of which are unable to fully explain computation due to their limitations.

      Delete
    2. Tina, your comments about the advantages and disadvantages of the functional and imperative models of computation make me wonder how runtime errors would be represented by a machine running each of these models. I am inclined to believe that computers running a functional model of a procedure generally terminate when they encounter an error. However, because in the imperative model "the procedures in question are sequences of commands that cover any manipulation of representations regardless of inputs and outputs,” the imperative model may be more advantageous as by definition it seems to allow for exceptions, and perhaps backtracking in the case of error.

      Delete
  19. This comment has been removed by the author.

    ReplyDelete
  20. Horswill raised an interesting question, also addressed in class, about “artificial agents becom{ing} more life-like” and the consequences for our perception of them. As artificial intelligence becomes more complex and life-like, will a robot or computer ever be seen as a real person, or as having the same rights as one? If a robot's only purpose is to move boxes and it does not think of anything but its task, there is no need to fight for its rights, even if it is being tirelessly used and discarded once it malfunctions. This robot has no feelings or greater purpose. A robot would need more than just a simple purpose to get human compassion. As a robot looks and acts more human-like, it is hard to imagine being able to distinguish it from humans. Perhaps we would feel the need to give it human rights, such as working hours and wages, to avoid slavery and abuse. Similarly, it would seem harder to harm it for no reason if it acts and expresses itself like a human (i.e. Isaure). But the question remains: does it feel or not? Does this robot feel pain? Feel sadness? Without the robot having feelings, it becomes easier to see how we could harm the robot and not give it rights. Would this apply to computers that would be a fixed non-human representation of Artificial Intelligence (AI)? If a computer containing AI tells you that it can feel pain and sadness, then would we not want to give it rights and freedom, overlooking the fact that it does not look anything like a human? I do not think that a robot needs to be life-like or even human-like to get compassion. After all, we have compassion for animals and they do not look like us. This computer would just need to express feelings and independent thoughts. For me, giving rights to a robot is based not on appearance but on the idea of it having feelings and some ability to think.

    ReplyDelete
  21. The idea that human consciousness could simply be computational processes, indistinguishable from a computer's, leads to a sort of existential crisis of free will. If our thoughts, and perhaps feelings, are the output of activations and deactivations of neural networks in response to input, then are our thoughts really our own and distinct from the thoughts of others? We want to say that yes they are unique and independent, but perhaps that is again the output we get from the human idea of being more intelligent than other species. Furthermore, if we are able to create artificial intelligence with the same neural networks and computational processes as the human brain, then might these AIs have the same feeling of free will and independence that we hold as being human?

    ReplyDelete
  22. In the article "What is Computation", the interesting topic of simulation is discussed to expand the unfathomable possibilities of computers. Assuming the brain is a computational device, the hypothesis that all the information encoded in the brain could be completely simulated by another computer can be naturally drawn. By this means, consciousness would be able to transcend its physicality and exist in a digital form. If an identity were uploaded to another computer, would it be ethical or unethical to treat it the same way as the original, since there does not seem to be any difference between the copies and the original except their forms? Further, taking what we have discussed in class, the easy and hard questions, into consideration: if feelings are something outside of computation, will the uploaded consciousness still have feelings? If they do not have feelings, should people treat them differently even though they behave entirely like the ones with feelings? If the uploaded consciousness does have feelings, meaning they are a result of computation, how and why do feelings exist?

    ReplyDelete
  23. In "What is Computation?", Horswill talks about computational neuroscience and the possibility of simulating a brain with a computer. He discusses how, with our knowledge of the brain so far, we are able to create simulations of neurons or even of physical functions like hearing. It seems then that we could simulate all that the easy problem encompasses. The Church-Turing Hypothesis states that “any function that can be computed can be computed by a Turing machine.” If this is true, then every function of the human brain encompassed by the easy problem (so everything except how and why the brain feels) can be computed by a Turing machine. Horswill states that so far every machine that has been created can be simulated by a Turing machine. However, it seems to me that feeling is uncomputable while everything else the brain does is computable; wouldn't that make it impossible for the brain to be simulated by a Turing machine? Admitting the brain is some sort of computer because it can perform computations, wouldn't it be superior in some way to the Turing machine because it cannot be simulated by it? And if it can be simulated by it, what would that mean for the Turing machine? Would it be able to know what it feels like to perform its computations and switch states? What would result from this?

    ReplyDelete
  24. “Regardless of whether the brain is best thought of a computer, people are actually just like computers on the halting problem: we can solve it a lot of the time, but we also get it wrong sometimes.” (Horswill,2008)
    I slightly disagree with what Horswill states here regarding the halting problem.
    The halting problem, as I understand it, occurs when there are errors in the programming procedure that cause an infinite loop, meaning the program will run the code repeatedly. As a result, the procedure won’t be able to give the desired output. I assume this sort of implies that once the program gets fixed, the halting problem would disappear and the program would run successfully on every subsequent trial.
    However, for the human brain, once the brain finishes writing the procedure (in other words, learns and comprehends the method), it might give the correct output on one trial, but it is not guaranteed to get it right the next time. There could be a variety of reasons: our memory capacity could lead to different versions of the procedure on each trial, or we might mistakenly use the wrong inputs, etc. In sum, even if the procedure is correctly written, or errors in the procedure get fixed after unsuccessful trials, the human brain still randomly and unpredictably makes mistakes.
    Lastly, I have a question regarding the section of computational neuroscience. Since computers can’t solve the uncomputable problems, how would the human brain interpret these uncomputable problems, and is the brain able to solve problems such as the halting problem?
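One hedged way to picture Horswill's "we can solve it a lot of the time, but we also get it wrong sometimes" is a step-bounded guesser (a toy sketch of my own, not a real halting solver): it is right whenever a program halts within its budget, but its "doesn't halt" answers are only guesses and can be wrong.

```python
# A step-bounded guess at the halting problem: run the program for a
# budget of steps and guess "doesn't halt" if the budget runs out.
# Right a lot of the time, but provably cannot be right in every case.
def bounded_halts(program, budget=1000):
    """program is a zero-argument generator function; each yield is one step."""
    steps = program()
    for _ in range(budget):
        try:
            next(steps)
        except StopIteration:
            return True          # it halted within the budget
    return False                 # guess: never halts (may be wrong!)

def halts_quickly():
    for i in range(10):
        yield i

def loops_forever():
    while True:
        yield

bounded_halts(halts_quickly)   # True
bounded_halts(loops_forever)   # False (correct here, but only because the budget guessed right)
```

A program that halts only after budget + 1 steps would fool this checker, which is exactly the "sometimes wrong" part.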

    ReplyDelete
    Replies
    1. The halting problem is not a looping error in a program (nor the brain), it is determining whether or not a computation (algorithm) will ever reach a solution and halt.

      Solving the halting problem means computing whether an algorithm will halt. This is provably impossible for computations in general. For a particular algorithm it may turn out to halt (by luck), or it may be possible to prove that that particular algorithm will halt, but this cannot be proved for every algorithm.

      (And all of this has next to nothing to do with what computation is, and whether cognition is computation!)

      Delete
  25. I think that Horswill is arguing, in a sense, that cognition is computation. Firstly, cognition to me means understanding and processing something in the brain, which is similar to how Horswill initially describes computation as a question-and-answer format. Further into his argument, he describes how the brain can do computation and how neurons could possibly fit into a computational model. However, this is dependent on the predictability of the neurons, which is impossible to know. In this sense, the brain is limited. Does this necessarily mean that it is not computational? Computations carried out by even the Turing machine are limited, as it cannot compute everything; it can just compute anything that any other machine can compute, yet it is still a form of computation. Just a quick thought: can this be compared to brains in the sense that every brain has the potential of computing what every other brain can compute, but some things are uncomputable for all brains, or is it more specific to an individual brain?

    ReplyDelete
    Replies
    1. If the brain is just doing computation then it is just the hardware for the computation. The reliability of the hardware is not a computational matter.

      What is something that can be computed that cannot be computed by a machine (given enough -- but not infinite -- time)?

      (Please make sure you understand what computation is.)

      Delete
  26. “In other words, intelligence is ultimately a behavioural phenomenon. That is, an entity is intelligent because it behaves intelligently, not because it’s composed of some special substance such as living tissue or a soul. Moreover, intelligence is a computational phenomenon, amenable to computational analysis." (Horswill, 2008)

    In "What is Computation", Horswill discusses the popular association of computation with thought, and therein with "our collective humanity and of our individual identities". This idea is interesting in juxtaposition with the above quote; if intelligence is a behavioural phenomenon & we utilize computational analysis to better understand this assumption - does it follow that understanding the computation of a computer that passes the Turing test would mean that we understand the "how" of thought and feeling? This seems like quite a stretch. Despite the understanding that intelligence is not rooted in some special part of our living tissue or soul, it seems to simplified to work towards machines that, to our best understanding, simulate all the neurons of our brain.

    Would this understanding of cognition as a computational problem require that there is no true free will or spontaneity? Would these models ostensibly recreate the outward appearance of human indecision or the sometimes unreasonable nature of emotional reactions? How could we assume that a model of all our neurons would actually model everything that goes on in the brain - this type of model would be fully limited by our own scope of knowledge.

    ReplyDelete
    Replies
    1. The easy problem is to explain how and why people can do all the things they can do and say. If computation can do that, then computationalism ("cognition = computation") is right, otherwise not. Neither computationalism nor causal determinism have much to say about free will, which is a feeling. It is even less likely that computation alone can explain feeling than that it can explain doing.

      Delete
  27. Regarding What is a Physical Symbol System—
    "Choosing an appropriate level of abstraction..."
    Does this necessitate an optimization of abstraction? Would it be more efficient if AI (and humans?) were to create and use models of our environment at the optimal level of abstraction? If so, would this be quantified at the most basic level of whatever is deemed ‘useful’ to the agent?

    ReplyDelete
    Replies
    1. What do you mean by "the optimal level of abstraction"? The goal is to explain how and why we can do all that we can do: see, hear, remember, reason, learn...

      Delete
  28. “…intelligence is ultimately a behavioral phenomenon. That is, an entity is intelligent because it behaves intelligently, not because it’s composed of some special substance such as living tissue or a soul.”
    This article argues that in the near future, we may be able to construct simulations of ourselves, and that brains are similar to computers in many ways. In this passage, Horswill brought up Turing’s argument that if a computer can act as if it was human and fool humans into believing it was human, it would be considered to be intelligent. This begs the question of what being human means. Does it simply mean to have intelligence (to have the ability to acquire and apply knowledge and skills)? Of course not. It involves consciousness, character, and many other attributes that are unique to humans. Humans are born with innate skills that cannot be learned (universal grammar) whereas a computer is programmed to do such things. Therefore, computers can never really “fool” humans into believing they are humans, because they don’t act of their own free will. So are computers really intelligent?

    ReplyDelete
  29. What is a Turing Machine?
    - Reading this reminds me of functional programming, where even to define, for example, an iterative function, one first has to define an encoding to represent the numbers needed to define the function. The fact that in school teachers instruct students to use high-level programming to implement simple functional programs is telling of the universality of Turing computing.

    What is computation?
    - Behavioural equivalence reminds me of neural networks that solve queries/answer questions in ways that we cannot explain without explaining the neural network algorithm itself
    - When comparing computations, what about other metrics of computation such as time or energy used?
    - The idea that the brain is not a computer since it can solve the halting problem is so interesting! Since we are able to ‘compute’ the answer to the halting question seemingly more abstractly than a mechanical computer can, does the human brain do computation on a higher level than Turing machines? Or maybe on an even more primitive level? What is the difference?
    - I wish this was required reading for COMP 202 and COMP 250


    Computation at 70, Cognition at 20
    - I agree that introspection does not form cognitive science, for introspection is not scientific!
    - So, would a Turing machine capable of passing the Turing test be able to introspect on cognition? It may be programmed in a way that its procedural knowledge is explicitly understandable to itself, unlike ours.

    ReplyDelete
  30. Computational neuroscience and Who am I?
    I found this part of the article very compelling and somewhat contradictory to the beginning of it, where it states that “In Western culture, we tend to take our capacity for thought as the central distinction between ourselves and other animals.” By saying that a simulation could simulate personality, and thus identity, it goes against that first statement. I do not agree with the statement that our capacity for thought is the distinction between us and animals. I especially think this is an absurd statement to make if we consider that a computer could be able to simulate a brain and thus thinking. By this very fact, we should consider animals thinking beings.
    I was also struck by the fact that some people argue that “we could live forever by “downloading” ourselves into silicon”. This seems to be a direct reference to the show Black Mirror, which shows how technology could rapidly come to dominate our world, and not always in a good way. Why would somebody want to live forever, to see everybody who was part of his/her life dying and moving on without him/her? And would we give agency and identity to these non-existing people (who existed once but are now only a fraction of what they were)? And would we be able to describe the symbol level of these “persons”, since we are not even able to explain the human one?

    ReplyDelete
    Replies
    1. Alina, I think you make some really interesting and thought-provoking observations here.

      I'd like to respond to the first part of your comment to make some observations about simulation. I think you’re conflating the ideas of simulating thought with actual thinking. As is shown by Searle’s Chinese TT argument, it’s possible to create a machine that simulates thought without any actual thinking occurring. In fact, that’s roughly what it means to simulate something: to represent something without actually being that thing.

      I don’t think that our ability to simulate thought makes our capacity for thought any less vital to our humanness. In fact, I think it speaks to how fundamental thought is, that we can only simulate thought, but not actually reproduce it in a machine.

      Although I don’t disagree with you that we can/should think of animals as thinking beings, I don’t agree with how you came to that conclusion. I don’t think that simulating thought reduces thought itself to something less complex or interesting. In other words, I don’t think your simulation argument equates what goes on in our heads to what goes on in animals’ heads. Please correct me if I’m misunderstanding your thinking and let me know what your thoughts are!

      Delete
  31. (What is computation?)

    "But many people argue that the brain is fundamentally a computer. If that's true, does it mean that (all) thought is computation? Or that computation is just thought?"

    Though I do not agree that thought is simply computation, is it really wrong to think of our brains as fundamentally similar to a computer? Or does this necessarily lead us to the conclusion that all thought is simply computation? Are the two assertions necessarily linked?

    "Changing our ideas about computation changes our ideas about thought and so our ideas about ourselves. It reverberates through our culture, producing excitement, anxiety, and countless B-movies about virtual reality and cyber thingies."

    In my opinion, a super interesting point here. Could all this confusion about computation and the idea of artificial intelligence simply be a labelling problem? Had the concept of computation as an innately human process not been so deeply entrenched in our societies before the information revolution, would there still be the same degree of philosophical upset over whether machines can think?

    "If we replace one computational system with another that has the "same" behaviour, the computation will still "work""

    This seems like the beginnings of a concept that resembles the Turing machine and other arguments for computationalism.

    "Often we can ignore this and focus on manipulating the "information" in the input rather than manipulating the components of a specific representation. Consequently, computation is often referred to as information processing and computing as the information technology industry."

    Interesting here how information is presented as the very base building block, with the implication that everything can be reduced to information. I have never seen "information" presented in this light before, but it does explain definitions such as 'information as the reduction of uncertainty' in a clearer way.

    It seems that the imperative model adds a level of complexity to the functional model in order to better study the problem of computation. I wonder how these models may eventually relate to our study of the brain's operations, and if any parallels can be drawn between either model so far.

    "As long as the procedures are behaviourally equivalent, they're in some sense interchangeable."

    This behaviorist statement is the best argument I have heard in favour of computationalism thus far in the course.

    "Thus far, every computational system that's been devised has either been Turing complete (can simulate and be simulated by other Turing complete system) or is weaker than a Turing machine (i.e. can't simulate a Turing machine but can still be simulated by a Turing machine)."

    I wish they elaborated further on this point in more technical language. It appears to me, from what limited knowledge I have of computer science, that once you achieve a Turing machine you can systematically create all other computing devices in its stead.

    My favourite approach to the study of computation is one based on behavioural output and behavioural equivalence. As an idea in flux, I believe that the definition of computation that most resembles its real-life position is one relating to what systems (be they biological or technological) can produce from a given input. In this way we can easily think of brains as computers, as the only requirement for them to be so is the behavioural output they produce for a given input.

    ReplyDelete
  32. (What is a physical symbol system?)

    Can a model truly be a model if it represents only part of the world? What was interesting in this piece was how the author places value on models of the world. Instead of grounding the value of a world model in its accuracy, or level of "correctness", the author instead deems the value of a model to be in its usefulness. This is reminiscent of the Functional Model of computation discussed in the previous article, where systems are measured not by how they compute, but by the effectiveness of their computing. Is it not within the very definition of a model to be able to accurately duplicate whatever you are modelling? Or is the function of a model enough to give it the title, even if it goes about the operation entirely differently?

    ReplyDelete
  33. (What is a Turing Machine?)

    I thought the proposition that a Turing Machine can do more than any physical computer was an interesting concept to play with. We are introduced to the Turing Machine as almost the most simplified model of a computational device possible: a simple input/output machine with only a handful of fundamental operations. Despite such a simple design, its unbounded memory capacity and unlimited speed of operation allow it to exceed the processes of any real computer. I think Turing uses this example to strip a computational device down to the bare-bones elements required of one, so that it meets the high bar set by the idealized Turing Machine.
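For readers who want to see those bare bones concretely, here is a minimal Turing-machine simulator (a toy encoding of my own, not Turing's original notation): a state, a movable head, a tape, and a finite transition table are all it takes.

```python
# A minimal Turing machine simulator. transitions maps (state, symbol)
# to (new_state, symbol_to_write, head_move), where head_move is "R" or "L".
def run_tm(transitions, tape, state="start", pos=0, max_steps=10_000):
    tape = dict(enumerate(tape))                  # sparse tape; "_" is blank
    for _ in range(max_steps):                    # a bound stands in for "unbounded"
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        state, write, move = transitions[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example machine: invert a binary string, then halt on the first blank.
invert = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
run_tm(invert, "1011")   # "0100_"
```

Everything a real computer does can, in principle, be re-encoded as a (much longer) table of this kind; only the unbounded tape and unlimited time are idealizations.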

    ReplyDelete
  34. Computational Neuroscience
    If we can write a computational model that predicts the firing of different neurons, then I assume that we can categorize the synapses for certain sensations into a sequence of synaptic events. For example, the sensation of tasting an orange will correspond to a discrete sequence of synapses.

    With this in mind, could this be a solution to the symbol grounding problem? For example, we can have a symbol “orange”. This symbol - or should I say these squiggles and squoggles - could also include the synaptic sequences of sensory input and motor output from interacting with an orange. Of course, these would be set and pre-determined before they are implemented in a machine or computer. That would imply that the machine/computer wouldn’t be able to ground new symbols.

    ReplyDelete
  35. "behavioral equivalence: if a person or system reliably produces the right answer, they can be considered to have solved the problem regardless of what procedure or representation(s) they used."

    How many different ways are there to do all that our brains can do? I feel as though this applies to more specific problems and actions; intuitively, it seems difficult to think that there are many radically different ways to design one general system that can account for everything the brain does.
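A concrete instance of behavioural equivalence for a specific problem (illustrative code, not from the reading): an iterative procedure and Gauss's closed-form formula use radically different representations yet reliably produce the same answers.

```python
# Two very different procedures, one and the same input-output function.
def sum_to_n_loop(n):
    total = 0
    for i in range(1, n + 1):   # step through every number
        total += i
    return total

def sum_to_n_formula(n):
    return n * (n + 1) // 2     # Gauss's closed form: no loop at all

# From the outside (input/output) the two are indistinguishable,
# even though their internal procedures differ radically.
all(sum_to_n_loop(n) == sum_to_n_formula(n) for n in range(100))   # True
```

Whether this kind of interchangeability scales from single problems up to "everything the brain does" is exactly the open question raised above.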

    How do we know that it’s the right explanation? Is there a right explanation? Is it possible that whatever we have created works but is based on the wrong explanation?

    ReplyDelete
  36. In “What is Computation,” Turing argues that “an entity is intelligent because it behaves intelligently, not because it’s composed of some special substance as living tissue or a soul.” He believes that it is a behavioral phenomenon - moreover, a computational phenomenon. In the imitation game, for example, guests hide in different rooms while other guests try to guess which is which by producing written questions and receiving written answers. The goal is for the hiders to fool the rest of the guests. In this case, it is assumed that the people in the rooms are “trying to act behaviorally equivalent to one another.” If a computer were to fool humans into thinking it were human, then it would also be considered intelligent, since he argues that an entity is intelligent because it behaves intelligently, not because it’s necessarily human. After all, intelligence is the ability to acquire and apply knowledge and skills.

    Now in relation to cognition, Turing most likely argues that intelligence equates to cognition. I think it depends on what type of entity you’re talking about. Machines are not able to experience feelings or emotions because they don’t have the ability to interact with their environment. If a machine were to accept sensory feedback, then perhaps it would be able to experience a wider range of emotions. One could argue, however, that since cognition explains the mechanism inside one’s head that gives one the capacity to learn and retain memories, computation can mean the same thing in a mathematical sense. But cognition has much more complexity. All in all, I think it’s difficult to say that computationalism is fully true. Many machines will not understand the meaning of feeling an emotion because we as humans can barely understand the definition of “meaning” at a microscopic level.

    ReplyDelete
  37. “In other words, intelligence is ultimately a behavioral phenomenon. That is, an entity is intelligent because it behaves intelligently, not because it’s composed of some special substance such as living tissue or a soul. Moreover, intelligence is a computational phenomenon, amenable to computational analysis.”

    I do not completely agree with Turing’s claim about intelligence and computation, although I do find it interesting.

    I agree that intelligence is a computational phenomenon and that there is an intelligent way of computing things. Computation encompasses all of the processes and things that are happening when an entity is thinking or doing something. I agree that there are in fact more efficient and “intelligent” ways of doing things.

    What I do not agree with is the fact that Turing states that intelligence has nothing to do with being human. I believe in order to truly be “intelligent” you need to possess some type of capacity to feel, process, understand and react to emotions and feelings. The way we feel, often has an impact on how and why we do (compute) certain things. Machines lack this emotional intelligence that often helps humans act appropriately and intelligently.

    The way that Turing describes this computational phenomenon helps explain the easy problem, but not the hard problem: explaining how and why we feel. I believe the way we feel impacts how and why we do things, which therefore impacts how we compute things.

  38. “If it’s predictable, then it should be possible to write a computer program that predicts and simulates it, and indeed there are many computational models of different kinds of neurons. But if each individual neuron can be simulated computationally, then it should be possible in principle to simulate the whole brain by simulating the individual neurons and connecting the simulations together.
    While there’s a difference between “should be possible” and “is actually possible”, this is the general idea behind the claim that we can understand the brain computationally. Assuming it’s right, it has some important consequences. For one thing, if brains can be simulated by computers, then computers could be programmed to solve any problem brains can solve, simply by simulating the brain. But since we know that computers can’t solve certain problems (the uncomputable problems), that must mean that brains can’t solve the uncomputable problems either. In other words, there are fundamental limits to the possibility of human knowledge.”
    In that case, would studying the brain and how neurons are connected to each other be useful? We seemed to reject this notion when talking about mirror neurons, for example, but this question sparks my interest. I recently had a conversation with a neuroscientist studying AI who stated that one day we will be able to replicate the brain in its entirety, and that when that happens, the simulated brain will have the same capacities as the real brain. This brings us back to the difference between a simulation and the real thing: a simulation won’t have the qualitative properties of the real thing, the “what it feels like,” but again, we can’t know, because of the other-minds problem.
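    The in-principle claim quoted above — simulate each neuron, then connect the simulations together — can be illustrated with a toy sketch. Below is a minimal leaky integrate-and-fire network in Python; the neuron model, parameters, and random connectivity are my own illustrative assumptions, not anything from the reading, and of course (per the comment above) nothing here claims the simulation *feels* anything:

    ```python
    # Toy illustration of "simulate each neuron, then wire the simulations together".
    # Leaky integrate-and-fire (LIF) neurons; all parameters are arbitrary choices.
    import random

    class LIFNeuron:
        def __init__(self, threshold=1.0, leak=0.9):
            self.v = 0.0              # membrane potential
            self.threshold = threshold
            self.leak = leak          # fraction of potential retained each step

        def step(self, input_current):
            """Advance one time step; return True if the neuron spikes."""
            self.v = self.v * self.leak + input_current
            if self.v >= self.threshold:
                self.v = 0.0          # reset after a spike
                return True
            return False

    def simulate(n_neurons=10, n_steps=100, seed=0):
        """Run a randomly wired network; return the total spike count."""
        random.seed(seed)
        neurons = [LIFNeuron() for _ in range(n_neurons)]
        # weights[i][j]: current that neuron j receives when neuron i spikes
        weights = [[random.uniform(0.0, 0.5) for _ in range(n_neurons)]
                   for _ in range(n_neurons)]
        spiked = [False] * n_neurons
        total_spikes = 0
        for _ in range(n_steps):
            inputs = []
            for j in range(n_neurons):
                # small random external drive plus input from last step's spikes
                drive = random.uniform(0.0, 0.3)
                drive += sum(weights[i][j] for i in range(n_neurons) if spiked[i])
                inputs.append(drive)
            spiked = [neurons[j].step(inputs[j]) for j in range(n_neurons)]
            total_spikes += sum(spiked)
        return total_spikes

    print(simulate())  # total number of spikes over the run
    ```

    The point of the sketch is only the structure of the argument: each neuron is individually predictable, so the whole network is just many copies of the same simulation wired together — which is exactly why any such simulation inherits the limits of computation (it can never compute the uncomputable).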

