Saturday, January 6, 2018

10c. Harnad, S. & Scherzer, P. (2008) Spielberg's AI: Another Cuddly No-Brainer.

Harnad, S. & Scherzer, P. (2008) Spielberg's AI: Another Cuddly No-Brainer. Artificial Intelligence in Medicine 44(2): 83-89

Consciousness is feeling, and the problem of consciousness is the problem of explaining how and why some of the functions underlying some of our performance capacities are felt rather than just “functed.” But unless we are prepared to assign to feeling a telekinetic power (which all evidence contradicts), feeling cannot be assigned any causal power at all. We cannot explain how or why we feel. Hence the empirical target of cognitive science can only be to scale up to the robotic Turing Test, which is to explain all of our performance capacity, but without explaining consciousness or incorporating it in any way in our functional explanation.




55 comments:

  1. “Feeling itself is not performance capacity. It is a correlate of performance capacity. The best that AI can do is to try to scale up to full Turing-scale robotic performance capacity and to hope that the conscious correlates will be there too.”

    By now I’m starting to get a much fuller understanding of the issues surrounding the Hard Problem. As seen in the previous texts (10a & 10b), the idea that we can understand feeling by looking at the behavior, anatomy, and general biology of a person is now off the table—firstly because we feel what we feel, not simply believe it (as Dennett seems to argue), and secondly because it only gives us a correlation of feelings (or beliefs about feelings) with functions. This is inherently wrong, because functions should correlate with further functions, and feelings should be explained with respect to feelings themselves. Consequently, due to the Hard Problem, we can’t figure out how and why we feel, BUT this doesn’t prevent T3-T4 (and T5) from existing. This is because feeling doesn’t have much to do with performance (whereas cognition has a lot to do with doing, as has been said in class). Therefore we can’t create feeling except by hoping that it will be correlated with the functions of a T4 or T5. What I’m trying to understand here is whether or not Harnad implies that if we create a perfect copy of a human being (T5), we might stumble upon feeling as well, yet still without this giving us any information regarding the fundamental nature of feeling itself.

    ReplyDelete
    Replies
    1. Neither T3, T4, nor T5 solves the hard problem of explaining how and why organisms feel rather than just do.

      But I don't know what you mean by "functions should correlate with further functions, and feelings should be explained with respect to feelings themselves."

      What needs to be explained is the causal function of feeling, over and above whatever turns out to be the causal basis of doing. Dualism would have done the trick, if it had been true, but it's not. But, having explained doing (whether T3, T4 or T5), there do not seem to be enough causal degrees of freedom for a causal role for feeling.

      Delete
    2. I think saying that feeling doesn’t have much to do with performance is kind of misleading; put that way it sounds like an argument for zombies – although that’s technically not provably wrong, so you do you. I think it might be better to rephrase it as: feeling doesn’t have much to do with reverse-engineering performance. The former is kind of the whole premise behind the authors’ statement: “The best that AI can do is to try to scale up to full Turing-scale robotic performance capacity and to hope that the conscious correlates will be there too”. The underpinning of that statement is the idea that, as Stevan says, we probably cannot have T3-T5 (heck, even T2) passing robots without feeling. It is speculated (strongly in this class) that if you manage to reverse-engineer performance, all that doing, “then there might be some realistic hope that it actually has a phenomenology (i.e., feelings)”.

      Delete
  2. Re: What is/isn’t a robot
    I thought this section was interesting because it made me think about what I would consider a robot and what would constitute one. If we were to replace all of someone’s organs with synthetic organs, we would not consider this person a robot or question their ability to feel. Is this because the synthetic organs do not change our interpretation of the person’s experience of consciousness and feeling? This relates to the other-minds problem and to why we think others feel and think the same way we do in the first place. If we were able to replace the person’s brain with a completely indistinguishable T4 robot brain, would they still not be considered a robot?

    ReplyDelete
    Replies
    1. A robot is an autonomous causal mechanism. Biological organisms are all robots. Because of the other-minds problem, I cannot tell whether any robot other than myself feels, whether T3, T4, T5... or t0.

      Delete
  3. “If we could explain what the causal advantages of feeling over functing were in those cases where our functions are felt (the 'why' question), and if we could specify the actual causal role that feeling plays in such cases (the 'how' question), then there would be scope for an attempt to incorporate that causal role in our robotic modeling.”
    I think this is an interesting proposition, but I am skeptical as to whether or not it is true. Not only are some feelings inexplicable, but everyone has a different experience of feeling and of the role feelings play in their individual life. Even if fear, for example, can be reduced to a causal role and programmed into a machine, this does not replicate every person’s individual experience of fear.

    ReplyDelete
    Replies
    1. The hard problem is not to explain this feeling vs that one. It is to explain how and why sentient organisms feel anything at all.

      Delete
  4. "Feeling itself is not performance capacity. It is a correlate of performance capacity."

    If this is indeed the case, would an epiphenomenalist view of consciousness be correct? Could feelings simply be a by-product of I/O pairings, serving some function other than affecting actions? If feeling has no bearing on performance, why does feeling require a causal mechanism at all, other than as a matter of interest? Could it not be the case that we could (theoretically) make a perfectly performing robot which does not feel at all? I wonder what the implications of such a robot would be.

    ReplyDelete
    Replies
    1. "Epiphenomenalism" explains absolutely nothing. It simply re-names that fact that we have no idea how or why organisms feel rather than just funct.

      And to be able to explain how and why there could (or could not) be a T3 zombie would be to solve the hard problem.

      Delete
  5. When I read the authors’ commentary on the Spielberg film, the comments on what makes something a robot as opposed to a real person reminded me of a study on children’s knowledge of appearance vs. reality. In the study, the kids were told a story in which either the insides of an animal were removed or the animal’s physical appearance was modified, and they were asked whether the animal had retained its original identity. What the researchers found was that children overwhelmingly thought that when the insides were removed the animal had changed identity, whereas the change in physical appearance did not alter its identity. I wonder, then, how these children would react if we told them a story about a person getting artificial organs, like the one the commentary talks about? Would they still endorse a change-of-identity belief, or would they, like adults, say that the person had not become a robot? Perhaps the difference lies in the fact that in the original study the insides were removed, whereas in the artificial-organ story they were replaced, so the children would indeed think as adults would. Children, like us, seem to be aware that what makes a person is something internal, and that when it is removed they are no longer what they were before. But the question still remains: what line must be crossed for us to no longer consider something with human-like capabilities human?

    ReplyDelete
    Replies
    1. But children, like adults, have trouble with the difference between inside the head and inside the mind. (And children are all dualists.)

      Delete
    2. This comment has been removed by the author.

      Delete
    3. This comment has been removed by the author.

      Delete


    4. Why do you think children are all dualists? Where does this come from? The fact that all children are animists? An innate sense of biology and physics (we know not to run into a wall)? It comes precisely from the fact that we have felt states, which further highlights the difficulty of the hard problem: the supposed mind/body separation is so intuitive to us.

      Delete
  6. “We can't, and hence we shouldn't even bother to try.”
    While I'm sure we are right to say it’s a hard problem, I can’t agree with the overall sense of pessimism about ever being able to solve it. For one thing, pessimism (and optimism, for that matter) in the colloquial sense of “it likely won’t be solved” is false. And it is false because it is basically fortune-telling – predicting the future. We simply don’t know whether we’ll solve the “hard” problem, because that depends on knowing what knowledge will exist in the future (and we can’t know that, because if we did, we would necessarily have that knowledge now!). Secondly, feeling exists for a reason – as does everything. Saying we can’t figure it out, or that it doesn’t have a reason, elevates “feeling” to the realm of the supernatural, which is de facto the realm of bad explanations. But just because something is soluble doesn’t make it easy or guarantee that its solution will be discovered.
    In any case, it does seem true that, conversely, passing the TT isn’t grounds for having solved the other-minds problem (i.e. you haven’t established that that being is feeling). Then again, we also don’t know, because we just don’t have a plausible theory of what consciousness (feeling) is and why it exists yet – perhaps a future giant might.

    ReplyDelete
  7. This article solidifies the idea that AI is limited to its doing capacities. As was discussed in connection with Turing's argument about the Turing machine as an example of active consciousness, AI can only simulate feeling. AI performance is only correlated with consciousness. Because of this, AI is a bad model of consciousness, and no empirical discipline is capable of explaining the how or why of feeling. If a Turing machine passes the Turing test, this only offers an explanation of the functional capacity of neurocircuitry; feeling is inexplicable with this information, or with any words for that matter. How do you explain feeling to someone? I don’t mean somatosensation or pain, I mean feeling as in emotions that are automatic. How do you explain anger or sadness to someone who has never felt that way? How do you know that your anger and sadness are the same as someone else’s anger and sadness? The answer is simple: you cannot.

    ReplyDelete
  8. It seems that, at best, we have to assume feelings have emerged much in the same way that spandrels do. They exist as a by-product of certain arrangements of matter and enhanced performance capacity, and we can’t seem to explain what adaptive advantage they confer. According to the text, we can't explain how or why it is we feel, and don't just "funct", so the epiphenomenalist view is the kind of (non)explanation we have to be satisfied with. However, I’m still not entirely convinced that the book is closed on adaptive advantages. Consider this question:

    “5. Will conscious systems perform better than unconscious systems? The question should have been the reverse: Will systems that can perform more and better be more likely to feel?”

    The reversely-worded question is an important and relevant one, but I think if we wanted to assess whether consciousness is adaptive, the original wording should be considered. Doesn’t it stand to reason that if conscious beings outperform (in evolutionary terms) unconscious ones, consciousness does confer some sort of adaptive advantage? Although we might not be able to explain intuitively what the benefit is, the evidence would suggest that the benefit is there nonetheless. This weasels out of the question a bit though. The conclusion that consciousness helps us survive isn't particularly informative if we don't know how it contributes to our survival.

    All this doesn’t change any of the implications for robotics–they should continue to focus on I/O. Even if we can produce some reason for our feelings beyond mere exaptation, I don’t think that particularly helps us explain causally how they arise. Whatever functional reason we can find to explain the existence of our feelings undoubtedly could not come from robotics, because that would require our ability to reverse-engineer them. Not only that, but if we can't answer the question of "why" (as posed above) through robotics (and it seems that we can't, because we're still missing "how"), it's unclear how we would go about answering it at all.

    ReplyDelete
  9. According to the piece, the best chance of including consciousness in AI is to "scale up to full Turing-scale robotic performance capacity and to hope that the conscious correlates will be there too.” My brain is blown. If it’s (likely) impossible to create consciousness in a robot, HOW is it that we’re conscious (seeing as we’re just biological robots)?

    Overall, I think it’s good that cognitive robotics is limited to performance capacity and doesn’t (can’t?) include feeling. If it included feeling we would be fraught with a lot of ethical issues when creating models, such as how to go about disposing of creations, or if we would grant them the same rights as humans (or relegate them to the limited rights we give non-human animals).

    ReplyDelete
    Replies
    1. We're not just biological robots. We're robots because we have autonomous sensorimotor systems with particular performance capacities, but what separates us from other robots is that we have the capacity to feel, and in fact do feel. Asking how it is that we're conscious (feeling) is just restating the hard problem. I think the ethical problem is similar to the clinical-relevance problem people brought up with the Fodor article. In the case of clinical relevance, yes, studying the brain is useful. Cognitive robots that feel (if that's even possible) may raise ethical constraints, but that's not an issue for us right now. Nonetheless, I hope your concerns are put to rest by this reading: Harnad concludes that there's no point in trying to build feeling as a property into robots!

      Delete
  10. Regarding question 2 from the AAAI Symposium "Are AI systems useful for understanding consciousness:" if the answer is correctly no, as Harnad believes, what is the point of continuing the field in this direction, if we have already established that the limit of AI is simply explaining easy problem stuff? No matter how completely we answer the easy problem using AI, if we have established that we will never bridge the gap from functional sameness to absolute sameness, what is the point of continuing? Is there a belief that we might accidentally stumble upon the answer to the hard problem through AI?

    ReplyDelete
  11. I think that the point of advancing AI is not to explain consciousness and the hard problem as a whole, but to make life easier. AI systems such as Siri and Alexa are built to make life simpler for humans: if you get a text message while you're driving you can ask Siri to read it out and respond, or you can ask her to call someone, or if you're at home you can ask Alexa to turn on music. These systems are too underdetermined to explain human consciousness, and will likely continue to be too underdetermined to explain much of anything of importance. Through building them we are forced to first learn how some things work (like syntax), but we really only grasp the basics as of yet.

    ReplyDelete
    Replies
    1. I agree completely with your point here. I said something similar in a skywriting for another article -- that the point of AI is definitely not the explanation of consciousness, or even really the recreation of consciousness. Although more and more research is being done to improve AI and make it more "lifelike," at the end of the day, like you said, it's an endeavour with one end goal -- to improve human life. Almost all AI initiatives nowadays are fueled by some agenda or some technological advancement meant to improve our species and our lives as humans in general. You're right that by building them we might learn how some human actions work, but even then, we're mostly getting the "what": what these things are, where they happen, and what neural response or action is responsible. There is no why or how.

      Delete
  12. “Because we cannot explain how feeling and its neural correlates are the same thing; and even less can we explain why adaptive functions are accompanied by feelings at all. Indeed, it is the "why" that is the real problem.”

    When addressing the hard problem (i.e., why and how organisms feel), I have always been fixated on the ‘how’; namely, how we can deduce a causal mechanism that generates feeling. This section of the article, however, really placed an emphasis for me on the ‘why’ of the hard problem – that is: Why have we adapted to feel? What is the adaptive role of feelings? Why is it adaptive for functions to be not just ‘functed’ but felt? While these questions of why are not new, I personally feel that I am only now beginning to explore the ‘why’ and to see the value of this additional line of inquiry.

    Moreover, this leads me to ask further questions on the why / the adaptive value of feeling... What if we had evolved as we did, just without feeling? If we were all Zombies (i.e., we could do as we do just without the subjective quality of feeling), how would we live differently? Would anything be different?

    ReplyDelete
    Replies
    1. The biggest way that I can imagine our lives changing if we were all zombies is in our motivation. For example, how many times have you done (or not done) something just because you wanted to? Since zombies basically have the same properties as we do, but without the feelings, I think our lives would be very different if we were zombies: we would lose what makes us as human beings unique, namely the real reason we do things.

      Delete
    2. Much like Laura, I have expended most of my energy thus far focusing on the 'how' of the hard problem. Moreover, I never fully grasped the difference between how and why and instead linked them together -- assuming them to be more of a 'grouped' problem that we were trying to answer. This article marked a turning point in that it illustrated the importance of separating 'how' from 'why' and treating them as two markedly different and equally important divisions of the hard problem. As Harnad states:

      "We are as ready to accept that the brain correlates of feeling are the feelings,  as we are that other people feel. But what we cannot explain is why: Why are some adaptive functions felt? And what is the causal role -- the adaptive, functional advantage -- of the fact that those functions are felt rather than just functed?"

      This is important because it shows that not only must we consider ‘how’ we feel in order to hopefully one day achieve a reverse-engineered T3-passing robot, but also how crucial the ‘why’ is. In order to answer one, we must in a sense also answer the other, and this article made that notion explicitly clear.

      Delete
  13. "But what the film misses completely is that, if the robot-boy really can feel... then mistreating him is not just like racism, it is racism, as surely as it would be if we started to mistreat a biological boy because parts of him were replaced by synthetic parts. Racism...is simply our readiness to hurt or ignore the feelings of feeling creatures because we think that, owing to some difference between them and us, their feelings do not matter."

    Philip Wollen's TEDxMelbourne talk was very interesting, and I think the main argument is summed up nicely by the quote from the Harnad paper, albeit in a different context. Harnad's article refers to differential treatment of a human boy and a robot boy, while Wollen's talk discusses the mistreatment of animals because they are different from us. Despite the other-minds problem, we can infer that other humans feel because when I kick someone, I see them react in a way similar to how I would. Along the same vein, animals react negatively when they suffer, providing us with evidence that they can feel. Therefore, we can say that mistreating animals is racism (according to the definition given by Harnad), or perhaps better put, speciesism.

    ReplyDelete
  14. Re: Xenophobia
    “The film depicts how, whatever the difference is, our attitude to it is rather like racism or xenophobia… But what the film misses completely is that, if the robot-boy really can feel (and, since this is fiction, we are meant to accept the maker's premise that he can), then mistreating him is not just like racism, it is racism”

    Our differential treatment of robots stems from the same logic behind racism – we treat things that appear different from us differently; just as a darker-skinned boy appears different from a lighter-skinned boy, a robot boy appears different from a human boy, and hence we are less inclined to ascribe feeling to him and thus more readily mistreat him. This concept can be extended to animals, who are different from us and thus are, once again, more subject to torture. Or, as seen in the Holocaust, the Jews, who were viewed by the Nazis as animals, were put in a category even farther removed from the Nazis themselves and from being human overall, and thus were also more readily subjected to cruelty and abuse.

    But if we morally object to the mistreatment of humans/those like us – because they FEEL – then by this logic we should also object to the mistreatment of an animal, or a robot, or even a different-looking human, who feels too.

    This relates back to the section on the other-minds problem: “Although, because of the "other-minds" problem, it is impossible to know for sure that anyone else but myself feels, that uncertainty shrinks to almost zero when it comes to real people, who look and act exactly as I do.” Just as we have no way to know whether any entity aside from our own self feels, we most easily ascribe feeling/consciousness to the entities that are most similar to us (humans). As entities differ – whether it be an animal or a robot-boy (or, at the time of the Holocaust, a Jew) – we have a harder time acknowledging the possibility that they too feel, and hence we (wrongly) more readily mistreat and abuse them.

    ReplyDelete
  15. “The research of Libet (1985) and others on the "readiness potential," a brain process that precedes voluntary movement, suggests that that process begins before the subject feels the intention to move. So it is not only that the feeling of agency is just an inexplicable correlate rather than a cause of action, but it may come too late in time even to be a correlate of the cause, rather than just one of its aftereffects.”
    This is especially interesting, as it is similar to the fear we feel during a fight-or-flight response. Originally it was thought that the feeling of fear when confronted with a dangerous stimulus caused the fight-or-flight response, but further research has shown that it is more likely the heightened state of arousal caused by the fight-or-flight response that comes to be identified as fear. So the functional response comes first, and the feeling comes later. This is interesting because it feels instinctive to think that we initiate our own actions and feelings, but do we really? Malebranche once offered a possible third party to our own actions, saying that God willed us to move and that we couldn’t move if he didn’t will it. While this seems completely false now, and for good reason, this article again calls into question whether we really do initiate our own actions and feelings.

    ReplyDelete
  16. “Racism (and, for that matter, speciesism, and terrestrialism) is simply our readiness to hurt or ignore the feelings of feeling creatures because we think that, owing to some difference between them and us, their feelings do not matter.”

    The way that David (the robot child programmed to love) is treated in the video posted above shows that even though a T3-passing Turing machine is indistinguishable from humans behaviourally, humans will still harm them because they’re different. Of course in the case of robots, they don’t feel and so we are not causing them suffering when we mistreat them. However, the behaviours and attitudes towards AI in the film highlight the problem that humans are willing to mistreat others because of perceived differences.

    Why does the other-minds problem matter? As Stevan stated in class, the real problem is the other-minds problem when humans get it wrong. We mistreat animals because we can see that there are differences between us and them, and so we brush aside their feelings. They are different from the robots depicted in the film because they can feel and are thus conscious. That’s what matters. It is important for us to know that despite the differences in physical appearance (and the fact that animals can’t talk, so they don’t have a voice for themselves), they feel the way we do, and so they experience suffering the way we do as well.

    ReplyDelete
  17. RE: Correlation and Causation.
    In the part about correlation and causation, the paper discusses the example of nociception (i.e., the perception of pain stimuli). To quickly sum it up, this paragraph describes pain sensations as “adaptive functions” that are felt (indeed we feel pain), but that could technically be performed without feeling, since robots can also detect nociceptive stimuli and withdraw from them (without necessarily feeling them). I agree with that: I do not understand what the function of feeling is in general. However, could this be a route to investigate if we want to try to explain feeling once and for all? Personally, I believe so. I think pain sensations should be among the first things cognitive scientists focus on when trying to solve the hard problem of how and why we feel. Pain sensations are interesting here because they are both function and feeling. Although we have no reported case of a human lacking the capacity to feel entirely, we have many examples of people who lack the capacity to feel pain. Conversely, we also have examples of people who feel pain without the stimulation (phantom limbs). I think there is something to do with that! Maybe cognitive scientists are wrong in trying to explain feeling all at once and should start by explaining sub-parts of the problem before addressing the hard problem.
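    To make the "functing without feeling" idea concrete, here is a minimal, purely illustrative sketch (the sensor and motor functions below are hypothetical stand-ins, not anything from the paper) of a nociceptive withdrawal loop that does the adaptive job of detecting and avoiding damage without there being anything it feels like to do it:

```python
# Toy sketch: nociception as pure input/output ("functing"), with no felt pain.
# read_pressure_sensor() and retract_limb() are hypothetical hardware stubs.

DAMAGE_THRESHOLD = 7.0  # arbitrary level above which an input counts as "damaging"

def read_pressure_sensor() -> float:
    """Stand-in for a hardware sensor; returns pressure in arbitrary units."""
    return 9.3  # placeholder reading

def retract_limb() -> None:
    """Stand-in for a motor command that withdraws the limb."""
    print("withdrawing limb")

def nociceptive_step() -> None:
    # Detect a potentially damaging stimulus and avoid it: the whole adaptive
    # function of nociception, specified without any reference to feeling.
    if read_pressure_sensor() > DAMAGE_THRESHOLD:
        retract_limb()

if __name__ == "__main__":
    nociceptive_step()  # prints "withdrawing limb"
```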

    ReplyDelete
    Replies
    1. I disagree in some sense. Pain is more of a reaction to a stimulus than a conscious feeling. I think what you have to consider, more than the pain you are feeling, is the conscious feeling that you are feeling pain. I think this aspect of feeling pain is not the solution; it goes beyond this.

      Delete
  18. One could compare the racism alluded to in this clip to what humans do to animals all the time. We essentially assume they don't have feeling, or that they are beneath us, because we are human and they are not. The same can be said of the AI in this video: the individuals at the carnival event assume that the AI doesn't feel, or doesn't have the same worth as humans, because it is not human, regardless of the robot's evident fear.

    ReplyDelete
  19. “And consciousness in today's AI and robotic (and neural) models is purely decorative, not functional.”
    I completely agree with the idea that what goes on in an AI’s “mind” is not the same as human consciousness. From what I’ve seen, these AI machines are only able to hold a conversation on certain subjects. Although I’m not an expert on AI, this leads me to believe that the creators have only grounded, or maybe simply programmed, the knowledge to talk about certain topics, but this is not how a human’s brain works. For AI to resemble us, they would need to be able to learn from situations as we do. My question is: would AI need the ability to feel in order to learn and properly integrate information into memory? Presuming they could simply add certain definitions and rules to their memory, would they still need to be able to feel in order to properly integrate those facts into their functioning memory (since without the emotions they would just be cold hard facts)?

    ReplyDelete
  20. In this paper, the authors argue against the existence of the unconscious mind, stating that what is called "unconscious" is merely performance capacity and cannot be described as unfelt feeling, since that phrase is self-contradictory. I am wondering whether the phenomenon of priming can also be explained this way. Our sensorimotor system is able to perceive stimuli without our consciously feeling them, and those stimuli in turn influence our behaviour. The stimuli remain at the behavioural level, suggesting that our feelings do not have a causal role in our behaviour. Feelings can be seen as a mere agency for us to gain a sense of self and volition, without any causal role in behavioural outcomes.
    Meanwhile, the paper argues for the equivalence of consciousness and feeling. However, in the current cognitive neuroscience literature, people tend to dissect consciousness into different components, such as attention and alertness, arguing that attention is something different from consciousness but has overlapping functions. Something can happen in your consciousness but outside of your attention. I wonder whether, if we look at this distinction from the perspective of the paper, it reduces again to the mere difference between felt states and behavioural capacities, since there would be no such thing as an unfelt feeling.

    ReplyDelete
  21. As highlighted once again here, the problem is not which feeling we feel, nor the uncertainty about the existence of feeling (the other-minds problem) - the problem we need to solve is "with the causal role of feeling in generating (and hence explaining) performance, and performance capacity." The "readiness potential" finding, that a brain process preceding voluntary movement may begin before one feels the intention to move, struck me as an impressive idea. We may not have as much agency as we believe we do if our neural circuitry is functioning without our knowledge or control. But since agency and intention are only feelings, and feelings cannot be a cause of performance/doing, this is irrelevant anyway, because intention was never the cause of the movement in the first place.

    In the paper, a question is posed: "And what is the causal role -- the adaptive, functional advantage -- of the fact that those functions are felt rather than just functed?"
    I can give one attempt at an answer. I think that if faced with a threat to your survival, feelings are necessary. For example, if an enemy was after you, and you knew why that person was your enemy, you would feel anger towards them (maybe because you have felt pain in the past) and know to stay away, because that person might inflict harm upon you. In the most primitive, evolutionary sense, feelings help you learn and adapt to your environment.

    ReplyDelete
    Replies
    1. One thought I would add to your example is that in a 'fight or flight' situation, I don't believe you have enough time to process the feeling of fear/anger - your response is more of an instinct. Can an instinct like this, one that results in an immediate response, also be considered a feeling?

      Delete
  22. Here Harnad and Scherzer break down the hard problem. I think my immediate reaction might be a bit of relief that an iRobot-esque disaster does not seem to be a realistic possibility in our foreseeable future (or ever). I do think there is an interesting question here about states of feeling. If we can build robots to perform as lesser organisms do and keep improving the designs until they do in fact pass the Turing test, at what point would we have to begin worrying about them having feeling? Only when they pass the Turing test, as Harnad suggests? Wouldn't this call into question animal consciousness if we do not test at each level? But even if we did, is there a measurable advantage to trying to build a robotic dog, or should we just shoot for a robotic human capable of passing the Turing test at T3? I think I am playing devil's advocate here. I truly cannot conceive why something programmed would at some point develop feelings on its own. Then again, if it was designed using machine learning, I suppose that could introduce some interesting questions into this example, i.e., an AI building a more intelligent AI, which builds a more intelligent AI, and so on. At some point would one AI design the next AI to have feeling? I can't see why it would, unless there truly is an evolutionary advantage to having feeling.

    ReplyDelete
    Replies
    1. I think your question has been brought up in class before, and the answer to why we don’t try to build a T3 passing worm first is that even though humans seem a lot more complicated, you could communicate with the human robot and ask it questions, vs. a dog or a worm (I use worm to illustrate the inaccessibility of a robot that’s not modelled after humans). I feel like we’d also be a lot quicker to pick up on what’s unacceptable behaviour for a human than we would for a dog or a worm. A human robot that doesn’t pass T3 does not imply that it passes T3 for a chicken or a worm, so I personally am not concerned about what it means for animal consciousness if feelings for a human robot only arise after a certain level of complexity. Anyway, I think we’re kind of more interested in HUMAN capacities of intelligence etc. than those of animals (sorry Harnad) – why explain being able to hunt for food when you can explain hunting AND abstract mathematical thinking?
      “I truly cannot conceive why something programmed would at some point develop feelings on its own”: as stated in the article, aren’t we all just programmed by our DNA? Your question could apply to humans as well. Why would a bunch of cells develop feelings? You also asked about an unfeeling AI designing a new iteration of AI to have feeling. Are you presenting a hypothetical situation in which humans fail at creating a T3-passing robot (i.e., one that feels), but create something that can do it for us? Purposefully ‘putting feelings’ into a robot implies that you’ve discovered the causal role of feelings (how they arise), so your hypothetical AI in the second-to-last sentence would have solved the hard problem. I’m more persuaded by the idea that if we create a T3/T4-passing robot, although we won’t have explained feelings, we nonetheless can’t assume the robot isn’t experiencing them.

      Delete
  23. Until now I was somewhat confused about why nociceptive function could not be at least the 'how' answer to feeling, but through this article and throughout the class I realized that nociceptive function is the answer to how we get pain, and that pain (and by the same extension nociceptive function) just happens to be felt... it did not have to be constructed that way by the Blind Watchmaker. When you compared it to thinking and understanding it became even clearer that we could have had the ability to avoid injuries or to communicate with each other through language, much as a chatbot or a full robot can communicate with me and avoid hardware damage through sensors and the like, without requiring feeling. Thus the glaring question remains: why/how are we even sentient beings? The feeling vs. functing dichotomy was a great help. Furthermore, saying that feeling is not a performance capacity but a correlate of performance capacity helped with the feeling vs. doing confusion that we've come across in previous articles. When people understand that the doing is not the feeling, it allows for the conclusion that the hard problem is likely insoluble.

    I also found interesting the idea that removing organs from someone and replacing them with synthetic ones would not make the person a robot. Essentially, person X could undergo surgery to have all of his major organs replaced by synthetic ones, and one would still expect that person X remains person X and does not become a robot, because he still feels like person X.

    ReplyDelete
  24. "It becomes trivial and banal if this is all just about cruelty to feeling people with metal organs."

    The critique of Spielberg's AI really resonated with me; how easy is it at this point to make a film that acts as an analogy for racism or colonialism or class wars or the violence of othering? This trope is redone over and over again in a seemingly unending fashion.

    It truly would have been fascinating if Spielberg had used this opportunity to think about the other minds problem. I can see how this would be challenging, as making up answers to the questions of the hard problem seems impossible. Nonetheless, mobilizing discussion surrounding the "feeling of other systems" would have been exciting.

    In particular, if one is indistinguishable from us and our thought/feeling process, is that not sufficient to ascribe true feeling to that being? What would we need to find to render that type of robot 'feelingless'? The boy in AI seems to assume the role of what we would call 'the zombie' - if he indeed has everything we have except feeling. The movie could have truly illuminated the hard problem and made the struggle of the hard problem more accessible.

    ReplyDelete
  25. When you say some robots don't feel, how would you explain a robot that is designed to "feel" the difference between patterns of wood, or to "feel" when someone is close to it? I am thinking of robots used in manufacturing that are able to distinguish between different materials and can work alongside a human. I am interested to see how you would define such a machine, and whether what it is doing is not feeling at all but something else.

    ReplyDelete
    Replies
    1. I don’t think Harnad is saying that robots don’t feel (I mean robots TODAY don’t feel, but the hypothetical T3+ robots we’ve been talking about could feel). He’s stated before that ‘Stevan says’ that although we have no way of knowing whether something that passes T3/T4 is feeling, he has no doubt that it IS feeling. I wouldn’t kick you even if I found out you were created in an MIT basement lab 4 months ago. You would cry out in pain and probably hold resentment against me just like any normal human would. When you say a robot is designed to feel the difference between wood patterns, I don’t think it has anything to do with the fact that for the robot it feels like it’s looking at a different texture, but rather it has to do with senses- what Harnad calls functing. (From my understanding you were confusing feeling with functing). As modern computation/robotics have shown, a lot (but not all) of our performance capacities/functions can be accomplished without feeling anything. Take the wood pattern for example; using a database of properties from different wood cuts, it could extrapolate visual patterns like tree rings, colour, and grain to correctly identify the wood it’s looking at. This doesn’t mean it FEELS like it’s looking at oak wood vs. pine wood, or that it FEELS like it’s trying to remember its training and correctly identify the wood. It’s just input/output. This is why the hard problem is so interesting in my opinion. If these things COULD be done without feeling them, why did feelings come into the picture? We don’t need to explain consciousness at ALL to theoretically build something that does everything a human can do, which is fascinating because feelings are so central to our identity as humans/living organisms.
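      To make the "just input/output" point concrete, here is a purely hypothetical sketch (the species names, properties, and numbers are invented for illustration; nothing here comes from the reading) of how a wood-identifying robot could map measured properties onto a label without it feeling like anything to do so:

```python
# Toy sketch: wood identification as a pure input-to-output mapping.
# The "database" and property values are made up for illustration only.

WOOD_DATABASE = {
    # species: (ring spacing in mm, colour index, grain coarseness)
    "oak":  (3.0, 0.6, 0.8),
    "pine": (6.0, 0.3, 0.4),
}

def classify_wood(ring_spacing: float, colour: float, grain: float) -> str:
    """Return the species whose stored properties are closest to the measured input."""
    def distance(props):
        r, c, g = props
        return (r - ring_spacing) ** 2 + (c - colour) ** 2 + (g - grain) ** 2
    # Pick the nearest stored exemplar: classification without anything felt.
    return min(WOOD_DATABASE, key=lambda species: distance(WOOD_DATABASE[species]))

print(classify_wood(3.2, 0.55, 0.75))  # -> "oak"
```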

      Delete
  26. “Although, because of the "other-minds" problem, it is impossible to know for sure that anyone else but myself feels, that uncertainty shrinks to almost zero when it comes to real people, who look and act exactly as I do. And although the uncertainty grows somewhat with animals as they become more and more unlike me (and especially with one-celled creatures and plants), it is very likely that all vertebrates, and probably invertebrates too, feel.”

    I’m a little confused about the reasoning behind the claim that vertebrates and invertebrates feel – if we cannot know for sure whether other humans feel (due to the other-minds problem), then how can we suggest that vertebrates/invertebrates feel? Apart from that, I found this article very interesting; it clearly laid out the different problems in understanding the causal mechanism of feeling. I found especially interesting the relation of feeling to the other four known forces in the universe (electromagnetism, gravitation, and the strong and weak subatomic forces), and how these four forces are unfelt, which poses an even grander problem for understanding the ‘force’ of feeling.

    ReplyDelete
  27. This comment has been removed by the author.

    ReplyDelete
  28. I found the comparison between functing and feeling interesting, and it helped clear up a few things for me. I think the way to figure out why we have these two classes is to understand under which conditions we funct and under which we feel. An example of functing is being asleep (no one knows how it "feels" to be asleep). On the other hand, instead of having just nutritional deficiencies, we also have a feeling of hunger when we need to eat to fix those deficiencies. The difference that I've noticed is that feeling seems to arise as a motivation and feedback mechanism when needed in consciousness (I feel hungry, so I eat and then I feel satisfied). Thus feeling might have an adaptive purpose. Then again, I can't know whether what I'm saying has any bearing at all; it's impossible for me to understand how different (or similar) it would be to survive and adapt without feeling. Until we build T3 and up, no one will be able to see the effect of non-feeling on survival (and even then we can't be sure that there is non-feeling, because of the other-minds problem).

    ReplyDelete
    Replies
    1. The problem with this is that we're notoriously bad at interpreting causality for our own actions. If you ask someone why they took their hand off a hot stove, they'll most likely tell you that it's because it felt hot. This is untrue, however: the reflex that caused you to pull your hand away from the stove occurred entirely in the spinal cord, before the pain signals even reached your brain. So while it may feel like the feeling caused the action, it seems as if the feeling of pain had no causal role in this scenario. This is true for many other scenarios: we act before we're aware of the action, and then try to explain our own behavior by claiming we felt it. Because of this, it's hard to claim that the feeling is adaptive; it seems the stimulus would have caused the same result even if we couldn't feel it.

      Delete
  29. “And know-how -- sensorimotor skill -- can come in conscious and nonconscious (i.e., felt and unfelt) form: When we do something (consciously), it feels like something to do it; if it did not feel like that, we would feel shocked.”

    I agree with this comment and the notion that consciousness can essentially be equated to feeling. In addition, it makes sense to me that there is really no complement to feeling, i.e. “not feeling”, because everything we do involves feeling.

    From this comment, I understand conscious sensorimotor skill as doing something that you are actively aware of, for instance learning how to hit a tennis ball with a racket, and nonconscious sensorimotor skill as doing something without necessarily realizing that you are doing it, like dreaming or clenching your fist in a coma. However, I was wondering what happens with actions that are somewhere in between. For example, if one has driven for many years, basal-ganglia procedural learning will often take over, so you are no longer actively aware of yourself driving. Of course you realize that you are on the road and behind the wheel, but there is a certain detachedness between your body and your mind. Does that automatically fit with the conscious form, or is it something that exists between the two?

    ReplyDelete
    Replies
    1. This is a really interesting question that you've raised. However, I still believe that in the example you gave, although procedural learning is taking over, you are still conscious and you still feel that you are driving. I think the detachedness that you feel lies more within the act of driving than in being consciously aware that you are driving. It has more to do with the motor actions than with the conscious feeling, which in this case would be different.

      Delete
  30. “To be conscious of something means to be aware of something, which in turn means to feel something. Hence consciousness is feeling, no more, no less.”

    Harnad and Scherzer state that consciousness is synonymous with “feeling”. As many cognitive scientists discuss consciousness in their papers without properly explaining what they mean by it, this is an important and useful clarification.

    Turing’s method to reverse-engineer our performance capacities focuses on doing, ignoring feeling. Cognition has to do with both doing and feeling, therefore in order to reverse-engineer a cognizer, we would have to be able to reverse-engineer feeling too. However, we currently have no clue as to why or how we feel (the hard problem), and we might never be able to answer these questions. This is why Harnad and Scherzer argue that practically speaking, cognitive scientists should first focus on creating the robotic version of the Turing Test, which would explain our performance capacities. Once we understand those better, perhaps we will be closer to understanding our feelings.

    However, Harnad and Scherzer are sceptical about this. I also feel that even if we understood perfectly how and why we do the things we do, we would not gain any knowledge on how and why we feel, because we could very well imagine beings who could act the way we do but not feel the way we do (zombies).

    ReplyDelete
  31. A huge oversight in many of the papers we've read for this class has been their avoidance of succinct, detailed definitions. I appreciate the clear-cut edges of each construct presented in this article. On that note, while I completely agree that consciousness is feeling, I wonder if there is a dimensional spectrum instead of a binary either-or scenario. That is, are things capable of more feelings (sensorimotor feelings vs. the full range of human feelings and emotions) thereby MORE conscious?

    The mind/matter problem was also very interesting to me. From my understanding, the mind/matter problem makes the core assumption that feeling is inside the body, or is some physiological result of the body. However, is it not possible that what causes feeling is external to that, the same way the force of gravity affects us while being outside of us? Some advances in neuroanatomy may suggest that parts of "feeling" occur in the brain. For example, the pathways of sensorimotor touch and pain have been identified, running through the spinal cord and brain. Additionally, correlational studies provide evidence that the amygdala is active in emotion. However, this research cannot prove that the brain causes these feelings; it may be that these processes are merely what occurs in reaction to the fifth, feeling force. I also (as a supporter of determinism) like the proposition that feeling may be a fifth, external force. Though I understand that it can limit people's feeling of free will, I don’t think agency and determinism are incompatible. Just because a process is predetermined doesn’t mean that the choices we make are not our own.

    ReplyDelete
  32. Where I really get hung up still is on the relationship between these unfelt forces and the feelings of them. What is happening to generate feeling from the unfelt? Is it akin to parallelism, or more like emergent interactionism? But if it is emergent interactionism - the idea that cognition, the mind, emerges from the brain - how could this possibly happen? How can these tangible, mechanistic processes generate feeling?

    Is the idea that we will build a robot with all of our functional capacities and feeling will just poof into the robot? Or does feeling need to be built in too?

    ReplyDelete
  33. This comment has been removed by the author.

    ReplyDelete
  34. "Unconscious knowing makes no more sense than unfelt feeling. Indeed it's the same thing. And unconscious know-how is merely performance capacity, not unconscious 'know-that.’”

    I am torn about the idea that one cannot know unconsciously. On one hand, I agree that one cannot know unconsciously, because it is like trying to convince yourself of a belief that you know to be false. Attempting to believe something means steering around contradicting evidence and seeking out evidence that supports the desired belief, and in doing so, you are conscious of some knowledge you are attempting to suppress. Otherwise, you wouldn’t be able to pick and choose evidence. In the same way that you cannot decide to believe because you already know what is true, you cannot know unconsciously, because knowledge is truth (at least as far as what is true within the world of an individual). On the other hand, I would argue that you can know unconsciously, because to be conscious of all knowledge would be overwhelming. Some knowledge functions unconsciously to help us perform actions without internal deliberation. If knowledge were all conscious, then every action, decision, and thought process would be overburdened with a lifetime’s worth of competing information, and it would be impossible to cognize or accomplish anything at all.

    ReplyDelete
  35. “The fact that they are metal on the inside must mean they are different in some way: But what way (if we accept the film's premise that they really do feel)? It becomes trivial and banal if this is all just about cruelty to feeling people with metal organs…An opportunity to build some real intelligence (and feeling) into a movie, missed.”
    I like this quote because of what I believe it implies (and I am a little more reassured of that implication after looking at the TEDx video). Yes, the authors are criticizing the inaccurate (or unintelligent) portrayal of potential AIs, but it seems as though they are also making a more ethical argument as well – well, maybe I’m projecting that onto the writing based on my own bias and the bits of class discussion that interested me! I like that, as a whole, it sounds like missing the opportunity to explore “whether other systems really do feel, or just act as if they feel, and how we could possibly know that, or tell the difference, and what difference that difference could really make” is missing the opportunity to reflect on the ethics of what we do with **seemingly** feeling robots. I’m thinking largely of the in-class discussion where Stevan talked about the meaning of life being contingent.

    ReplyDelete
  36. "Explanatory task of biology is to reverse-engineer what evolution built, in order to explain how it works: functionally, causally. Often this requires building real or virtual models to test whether or not causal actually work"

    I suspect that this approach to the explanatory task of cognitive science is partially responsible for the "zombic hunch" that Dennett describes in his paper, The Fantasy of First-Person Science. The "zombic hunch" is described by Dennett as an inexplicable intuition some philosophers have that a robot designed to have thoughts, learn from experience, and use what it learned would still lack a certain "je ne sais quoi" in answering Kant's question about thought.

    Could the inexplicable shortcoming, the "je ne sais quoi" that is missing from Turing's statement about robotics, be the lack of reverse-engineered "experience", or, as Harnad puts it, "feeling", in the robot's design?

    Certainly heterophenomenology is not a virtual model that tests causality, and therefore fails to adequately reverse-engineer what evolution built. Instead, it acts more as a report of the processes we already know to be happening within the human psyche, without giving any explanation of why or how they occur.

    2. "Are AI systems useful for understanding consciousness?
    i. Not at all, They are useful only inasmuch as they help explain performance capacity."

    I'll challenge this response in the following way. Even if you believe that AI cannot and will never be able to model consciousness, is its inability to do so not informative in its own right? If AI were able to model consciousness, would consciousness not have to be an emergent property of performance capacity, for example? Since AI cannot model consciousness, we can say with certainty what consciousness is not. Normally this wouldn't be helpful, but when we are navigating a subject as abstract as consciousness, any concrete characteristics are welcome, even characteristics that we can show consciousness does not possess.

    "Will systems that can perform more and better be more likely to feel?"

    Though the authors respond to this with a guarded "yes", I wonder how they would argue that this is different from the problem in evolutionary psychology of there being no added benefit for an organism in doing things consciously rather than unconsciously.

    If "consciousness" is simple "feeling" as Harnad asserts, then the question of "whether systems that can perform more and better be more likely to feel" can be easily mapped onto the question of "is there an evolutionary advantage to consciousness" can it not? Why then is the answer to feeling "yes" while the answer to consciousness remains "no".

    I also think it’s a bit of a fallacy to use scaling up the animal kingdom as a way to argue this point. I think it would be more appropriate to imagine an unfeeling zombie that nonetheless had all of our other cognitive capacities, and then investigate the potential added benefit of feeling.



    ReplyDelete
