Saturday, January 6, 2018

10a. Dennett, D. (unpublished) The fantasy of first-person science


Extra optional readings:
Harnad, S. (2011) Minds, Brains and Turing. Consciousness Online 3.
Harnad, S. (2014) Animal pain and human pleasure: ethical dilemmas outside the classroom. LSE Impact Blog, June 13, 2014.


Dennett, D. (unpublished) The fantasy of first-person science
"I find it ironic that while Chalmers has made something of a mission of trying to convince scientists that they must abandon 3rd-person science for 1st-person science, when asked to recommend some avenues to explore, he falls back on the very work that I showcased in my account of how to study human consciousness empirically from the 3rd-person point of view. Moreover, it is telling that none of the work on consciousness that he has mentioned favorably addresses his so-called Hard Problem in any fashion; it is all concerned, quite appropriately, with what he insists on calling the easy problems. First-person science of consciousness is a discipline with no methods, no data, no results, no future, no promise. It will remain a fantasy."
Click here --> Dan Dennett's Video

Week 10 overview:





and also this (from week 10 of the very first year this course was given, 2011): 

62 comments:

  1. I’m finding it slightly difficult to really figure out what’s right and wrong here (and even what I think). On the one hand, I wonder if it is possible in some sense to collapse the two teams? As in, do doing and feeling somehow depend on, relate to, or interact with one another? For example, you may need feeling to do everything I can do. Or something like: the moment our T3-passing robot is properly constructed, it is able to do and feel in all the same ways we can at the same time (the light gets switched on and it all starts to work).
    Also, when Dennett describes Team B as thinking that there is some “extra spark, some extra ingredient that sets [people] apart from all non-living stuff” it sounds like a “souls exist” thing which, as Dennett rightly points out, is a bad argument. But this is clearly a misrepresentation of what Team B is saying. They aren’t arguing for souls, just that feeling actually exists. The denial of souls isn’t the same thing as the denial of felt states (for one thing, you can’t really avoid the latter). My feeling isn’t just some mistaken belief, Dennett!
    But then, while I do generally think that doing and feeling are qualitatively different, I also can’t get on board with Chalmers’ zombie thing. I just can’t wrap my head around how a zombie can (theoretically) exist in any relevant way. Besides, how can anyone seriously suggest that some zombie, whose molecules are matched to my own and which can do everything I can do, could be *just* a zombie in the first place? It’s not explicitly relevant to our course, but it does bring up some scary ethical implications. What more would you need, if not that, to think it’s probably a bad idea to mistreat such a being? Because if it really were a zombie, it wouldn’t matter how we treated it – it would feel no more than a rock.
    It’s all a lot to handle.

    ReplyDelete
    Replies
    1. Yes, the hard problem is about feeling.

      But, yes, feeling and the hard problem are also the origin of the notion of an immortal, immaterial soul, reincarnation, after-life, paradise, etc. etc.

      Doing and feeling are of course related, and correlated. (After all, it's the brain that causes both of them. But to explain how and why it causes doing is the "easy" problem; to explain how and why it causes feeling is the hard problem.)

      Of course the notion of a T5 zombie is absurd and not even worth thinking about. But that is not an explanation of how and why there can't be one. And as we move down to T4 and T3 the challenge becomes ever greater, for a T3 zombie (Isaure) is perhaps a tiny bit less absurd than a T5 zombie -- and the other-minds problem prevents us from knowing in either case. Even more challenging is the likelihood that many species (like bacteria, plants, sponges, and other organisms without nervous systems) really are zombies.

      Delete
  2. 1. “First of all, remember that heterophenomenology gives you much more data than just a subject’s verbal judgments; every blush, hesitation, and frown, as well as all the covert, internal reactions and activities that can be detected, are included in our primary data.”
    2. “On the one hand, if some of your conscious experiences occur unbeknownst to you (if they are experiences about which you have no beliefs, and hence can make no "verbal judgments"), then they are just as inaccessible to your first-person point of view as they are to heterophenomenology.”
    3. “This is a fascinating and surprising phenomenon, predicted from the 3rd-person point of view, and eminently studiable via heterophenomenology.”

    Heterophenomenology claims to be a more plausible solution to the Hard Problem than the ‘first-person point of view’ because it takes on the perspective of the natural sciences and can thus explain ‘scientifically’ the existence and manifestation of our feelings. Though this point of view is tempting, I don’t think Dennett offers a very credible case for heterophenomenology: it gives correlations, explications, and predictions (see quote 3), such as “she is going to be sad,” but doesn’t address the root of the question, i.e., “what is it that makes her able to be sad?” Through the observation of ‘every blush, hesitation, and frown’ (quote 1) and so on, we are able to create correlative observations, which still fail to answer whether we have a mind and why, fundamentally, we feel. If a T4 says that it feels, do we believe that it feels? There’s no veritable way of knowing. Dennett even somewhat admits that conscious experiences are as inaccessible to heterophenomenology as they are to the first-person point of view (see quote 2). The point here is that the first-person point of view is aware of that. Please correct me if I’m wrong, but heterophenomenology merely scratches the surface of the question, while pretending to address it up front, which is what makes it less plausible. Its opposing view as depicted in this article admits that we cannot find a solution to this problem -- at least not as simply as Dennett presents it.

    ReplyDelete
    Replies
    1. Heterophenomenology is like weather forecasting: It predicts but it doesn't explain. (And it's really just T4.)

      Delete
  3. Dennett argues for the validity of using heterophenomenology as a means of studying consciousness because it is a methodology that captures scientifically standard data (3rd person data). I think that he makes a good point in refuting that it’s impossible to capture 1st person data for our own internal conscious states because to describe these states, we would have to give verbal reports of them, which then becomes 3rd person data anyway.

    However, I think it should be noted that the heterophenomenologist is still just observing the behaviour of a person. The quote, “[h]eterophenomenology gives you much more data than just a subject’s verbal judgments; every blush, hesitation, and frown, as well as all the covert, internal reactions and activities that can be detected, are included in our primary data” only highlights this. Since this method is only acting and concluding upon data of human behaviour, it still fails to crack the problem that we cannot observe the actual mind. In addition to this, heterophenomenology can only give you the correlates of human consciousness. If you were to get someone’s expressions of their beliefs about their consciousness, their verbal reports would tell you what/where/when they had these beliefs. If you were to probe them further and ask why they have these beliefs and how they have come to conclude upon these gut intuitions about their beliefs, I suspect that they might not have an answer. From only having answers to what/where/when, the data from heterophenomenology can only settle on explaining human consciousness through correlational relations, not causal ones. The hard problem of how and why we feel would still go unanswered.

    ReplyDelete
    Replies
    1. (second part of SW)
      Molecule for molecule identical to me, and identical in all the low-level properties postulated by a completed physics, but he lacks conscious experience entirely . . . he is embedded in an identical environment. He will certainly be identical to me functionally; he will be processing the same sort of information, reacting in a similar way to inputs, with his internal configurations being modified appropriately and with indistinguishable behavior resulting. . . . he will be awake, able to report the contents of his internal states, able to focus attention in various places and so on. It is just that none of this functioning will be accompanied by any real conscious experience. There will be no phenomenal feel. There is nothing it is like to be a Zombie…

      Since heterophenomenology restricts itself to verbal reports, I think that it would face a problem when explaining the Zombie Hunch. The zombie that Chalmers describes is T5-passing. If the zombie is able to exhibit the same behaviour as a human, how can heterophenomenology differentiate between the two? Both the human and the zombie have convictions of the direct evidence of their own consciousness. This poses a problem, as we still haven’t found a way to distinguish what has a mind from what merely behaves like it has a mind. I think that if we were able to solve the Hard Problem, then we would be able to differentiate the two. The problem is that it seems impossible from the knowledge that we have now. I think we mentioned at the beginning of the course that dualism would be an excellent solution to the Hard Problem, but it is unjustified because we haven’t scientifically proven that there is another force in the universe that can explain dualism.

      Can science really be the answer to solving the Hard Problem? In Frank Jackson’s, Epiphenomenal Qualia, he brings forward the “knowledge argument” to prove that (weak) physicalism -- that everything in the universe can be explained by science -- is false. To summarize: prior to seeing the colour red, I learn about the science behind how my visual system comes to interpret red. However, when I do see the colour red for the first time, I learn something new. That is, I learn the feeling of experiencing the colour red. If science were to explain everything, then I shouldn’t learn anything upon my first experience of seeing red. I bring this argument up because maybe we cannot turn to science to help us figure out the answer to the Hard Problem.

      Delete
    2. Just like T4 brain correlations of feeling, heterophenomenology is just T3/T4 verbal and other behavioral correlations with feeling. It does not and cannot explain feelings causally.

      But the hard problem is not finding out how and why we come to have beliefs about feelings, but how and why we come to have feelings.

      The other-minds problem continues to prevent us from knowing whether anything feels, including T3, T4, T5. But even if we knew, it would not explain how or why it feels.

      (Whether the solution to the hard problem would also be a solution to the other-minds problem is too hypothetical, because no one has a clue of a clue what a solution would be: I think that if dualism had been true, the other-minds problem would still be there, but you could perhaps predict whether and what sentient beings felt perfectly, if you measured the "psychic" force.)

      Delete
  4. At the beginning of the course we talked about the fact that we can be sure of only two things, the first is the truths of mathematics and the second is the fact that we feel the way that we do. A redeeming quality of heterophenomenology is the fact that it gives our subjective experiences some value by acknowledging them as starting points and then tries to expand on them based on more objective measures. If our subjective experiences are one of the only things we can really be sure of it is important to consider them in some capacity.

    We must remember, though, that self-report of subjective experiences is limited by our capacity to relate our feelings to others as we experience them, and, as behaviorism demonstrated, this introspection does not get us very far in explaining the easy problem, let alone the hard one. At the same time, the objective “behavioral” measures which heterophenomenology wants to apply will not be able to tell us anything about the fundamental question of how and why we feel the things that we do. Just as we discussed with mirror neurons and neuroimaging, these techniques provide merely correlational data for the fundamental questions, nothing concrete or definitive, and they do not provide the answers to the questions we are really interested in answering. In brief, I think that heterophenomenology does little more for the hard problem than brain imaging does for the easy one. We are limited by the other-minds problem, and any attempt to solve the problem will rely on correlational rather than causal techniques, hindering us from ever solving it.

    ReplyDelete
    Replies
    1. Maria, I was also thinking about the methodological constraints of first-person data, third-person data, and the attempt to connect the two. I agree, self-report of subjective experiences is limited: essentially it relies on our own assumptions about our experiences, which are then modified in our linguistic reports to relay them to other people. In another class, when discussing third-person data on consciousness, I learned that the most useful third-person data are at the level of single neurons, whereby a researcher monitors representational content that correlates with the content of consciousness, and even this is limited to nonhuman subjects in most cases. What's even more difficult in research here is that there is no general agreement on what consciousness is, so there are studies on executive function, metacognition, awareness/self-awareness, unconscious processes, etc., and all of these studies are presumed to fall under the umbrella of 'consciousness'. No one doubts that heterophenomenology collects a lot of data, or, as Dennett said, a "maximally inclusive science of consciousness," but the problem is that the study of consciousness is heterogeneous because there's no single consensus on what consciousness entails. With regards to the hard problem, it's really missing the mark because we can't just conflate feeling with consciousness. I'm not entirely sure how heterophenomenology even attempts to assimilate first-person data into third-person data (and we know science operates on third-person data). Moreover, Dennett never explained what "bracketing for neutrality" even entails.

      Delete
    2. Amber, 1st person "data" means felt experience. If it didn't/doesn't feel like something to "have" (i.e., experience) it, it's not 1st person data.

      You'll get nothing but confusion with all the synonyms and variants of "consciousness." They're all weasel words. Replacing them all by feeling will never lead you astray, and it will sort a lot of it out. ("Stevan says"...)

      Delete
    3. I understand! However, Dennett was attempting to somehow make 1st person data into 3rd person data, so it can somehow be "objective"?
      I was highlighting research that relates to Dennett's heterophenomenology, and demonstrating that none of it really relates to the hard problem, because consciousness is a weasel word.

      Delete
  5. Do we need to be able to feel to be able to do things? My assumption is that we do need to be able to.

    In the story “Funes the Memorious,” Funes's incredible memory stopped him from being able to categorize, but he was still able to do many other things, such as talk. However, this was just a fictional story to show the power of memory. Surely, if he wasn’t able to categorize, he wouldn’t have been able to actually do anything.

    Similarly, how can we be sure that a zombie can do anything? If symbols get their meanings through symbol grounding and feeling, then they wouldn’t be able to categorize either, and thus the zombie thought experiment - in which my zombie can do everything I do except feel - doesn’t hold much weight.

    ReplyDelete
    Replies
    1. If you can explain how and why there cannot be a T3, T4 or T5 zombie, you have solved the hard problem (but just saying you don't see how we could do what we can do if we did not feel does not solve the problem).

      Delete
    2. I agree, Dennett is unnecessarily fixated on the zombie twin and the whole zombie example (or “zombie challenge”, as explored in the next reading). I don’t know if Dennett actually believes there can be a zombie that can do anything (or do anything a person can do, if that’s what you meant), but I don’t think his point is to take a stab at the hard problem through the zombie thought experiment. In fact, the more I think of it, the more I think maybe he didn’t think that zombie could really even be a zombie (not feeling)? I say this given his fixation on proving that heterophenomenology does in fact include experience (**feeling**) and how he argues that “Chalmers and his zombie twin are heterophenomenological twins”.
      I also agree with what you said though, it doesn’t much matter to argue that a zombie cannot feel if you don’t bother explaining how and why.

      Delete
  6. I really enjoyed reading the first part of this article because it really clearly laid out both sides of the argument and I felt like it blended together a lot of what we have been talking about in class and the different authors. Although Dennett makes a convincing argument throughout the paper for the A team, I still find it difficult to fully comprehend his side. The B team makes more sense to me, perhaps because it has been a large part of the dialogue of this course, but even if we were able to “make a robot that had thoughts, that learned from “experience” (interacting with the world) and used what it learned the way we can do”, it would not answer the hard problem of consciousness and how we know for sure what the robot would or would not be feeling.

    ReplyDelete
    Replies
    1. Yes, heterophenomenology, whether T3, T4 or T5, solves neither the hard problem nor the other-minds problem.

      Delete
  7. I am not sure how “heterophenomenology, which was explicitly designed to be the neutral path leading from objective physical science and its insistence on the third-person point of view, to a method of phenomenological description that can (in principle) do justice to the most private and ineffable subjective experiences, while never abandoning the methodological principles of science” aids in supporting the A team’s argument. Is it the case that the robot, learning from experience, would then be able to describe verbally their internal states, therefore solving the hard problem? Similarly, because of false positives and false negatives, even if this were to be the case I am not sure how we could ever be sure of their internal states or solve Kant’s question.

    ReplyDelete
    Replies
    1. The hard problem is not describing feelings but explaining them causally.

      (And the 1st/3rd person distinction is a red herring. It's not about two "views" -- there's really only one "view" and it's a felt one -- it's about explaining how and why organisms can do what they can do (including what they say) and explaining how and why organisms feel.)

      Delete
  8. Dennett's idea, as good as it may be for understanding the easy problem, does not tackle (or even address) the hard problem. Self-reporting is limited to our own subjective view and cannot give a 100% accurate picture of what is actually going on in our brains. Even if heterophenomenology could somehow account for how experiences felt to different people, it would not account for why the experiences felt different in the first place. You would need to be able to heterophenomenologically assess a T3/4/5 zombie and show how it experiences the world (whether it feels or doesn't feel) in order to even start taking a stab at the hard problem.

    ReplyDelete
    Replies
    1. And even with T3/4/5 you're up against the other-minds problem (without having solved the hard problem...)

      Delete
  9. Dennett's model of heterophenomenology does not do much in explaining how we feel. By taking a 3rd person point of view to the hard problem, Dennett simply explains a first person's view of the world. This doesn't explain how that person came to have these feelings, simply that they did.

    Could it be possible that the other minds problem and the hard problem are an example of unsolvable proofs? Or is there possibility for an answer?

    ReplyDelete
    Replies
    1. There are theorems in maths that are (1) true and (2) provably (on pain of contradiction) unprovable (Goedel's Proof).

      There is no proof (on pain of contradiction) that the hard problem is unsolvable. Most people think it's just hard, but not unsolvable. I think ("Stevan says") it's unsolvable, and on Tuesday I'll tell you some reasons I think it's unsolvable. (But it's just "Stevan says"...)

      Delete
  10. This paper addresses whether we can study the “feeling” portion of the human experience objectively. Dennett pits his team (the A-team), who argue that we can gain insight by studying the experiences of others, against Chalmers’ (the B-team), who argue that the only valuable way to study experience is through first-person data (examining “feeling” in our own experience). Interestingly, I find myself agreeing with the B-team throughout most of the paper, until the very end.

    Dennett argues in favour of heterophenomenology as a study of feeling. In other words, we collect as many subjective details as we can about a person’s experience (through self-report) and then measure those against all kinds of objective measures to draw as detailed a picture of their experience as possible.

    The fundamental objection I have to this, aside from the massive methodological constraints it presents, is the same as the one we have against simulation of cognition in computationalism: formally describing something isn’t the same as reproducing it. No matter how accurate or detailed, we don’t experience the world as a list of feelings and sights and sounds; that’s just how we communicate our experience.

    Not only that, but there’s no guarantee that anybody describing a conscious experience is actually having one (the Zombie Hunch). Dennett seems to dismiss this. He argues there isn't any important difference between zombies and ourselves (even to the zombie), although this seems to be because he interpreted Chalmers as arguing that zombies have a conscious mind:

    “[..] for surely we mustn’t assume that Chalmers is right that there is a special category of “phenomenological” beliefs–that there is a kind of belief that is off-limits to “zombies” but not to us conscious folks.”

    The very definition of a zombie is that it can do everything we do but doesn’t have access to the same phenomenological beliefs we do (or any beliefs, for that matter). And, contrary to Dennett’s supposition, those phenomenological beliefs are the very ones we’re trying to study. These are the first-person facts we’re interested in.

    This is where I depart from the B-team though. Although it’s absolutely true that only we know what it’s like to feel what we feel, that’s of no use whatsoever in the context of science. If you want that information to be shared, then you’re defaulting to third-person measures, and you may as well advocate for heterophenomenology, because it’s more detailed than anything else. It’s the closest thing we have to Searle’s Periscope in light of the other-minds problem.

    As Dennett writes:
    “What a good idea: we can let subjects speak for themselves, in the first-person, and then we can take what they say seriously and try to systematize it, to capture the structure of their experience! And we could call it heterophenomenology.”

    This is all quite circular: the information we most desperately need access to is first-person data, but we can never get it, specifically because of the other-minds problem. Once we accept that, we default to heterophenomenology, because it’s the closest approximation we can get. But it doesn’t give us anything resembling what we were looking for to begin with.

    The real issue is that whether we get the information from a first- or third-person perspective is irrelevant to the question. We already know that we feel things. Both sides acknowledge that. How accurately we can describe/reproduce those feelings is irrelevant if we don't know how and why we feel them. Although Dennett critiques Chalmers for failing to offer a solution to the hard problem, heterophenomenology doesn't do any better. It's merely a formal description of cognition, and a very detailed account of when and where. It still doesn't get us any closer to how and why.

    ReplyDelete
    Replies
    1. The hard problem is about feeling, not beliefs-about-feeling.

      The A Team is wrong that "heterophenomenology" (which is just a fancy way of saying T3) solves the hard problem, or proves there is no feeling. It's just weather-reporting (predicting what people feel from the brain and behavioral correlates, including, of course, the verbal ones).

      The B team is right that there is feeling, but that's all they say, really.

      Of course there are zombies (unfeeling things): Rocks and molecules are zombies. Toasters are zombies. Today's T1 robots are zombies. Microbes and plants and organisms without nervous systems are probably zombies. Dead and brain-dead organisms are zombies.

      Isaure (T3) might be a zombie, but ("Stevan says") either she isn't a zombie or we can never know better.

      T5 zombies are a stupid idea (yet it's what Chalmers means). "Stevan says" T5 zombies are probably impossible -- but explaining why they are impossible would require solving the hard problem (of explaining how and why organisms feel).

      Delete
  11. “Here is Chalmers’ definition of a zombie (his zombie twin):

    Molecule for molecule identical to me, and identical in all the low-level properties postulated by a completed physics, but he lacks conscious experience entirely . . . “

    Chalmers’ definition here doesn’t sit right with me. The way I’ve heard the zombie problem described in previous classes was that the zombie was functionally identical, but not completely identical; in other words, it’s T3, not T5. The T3 version seems to just push the argument to “there must be something important about the materials,” which results in postulating the T5 version anyway. My problem with that is, if you’ve copied everything molecule for molecule and the copy is missing something from the original, then either you have not actually copied everything, or you’re left claiming that what’s missing is something outside the laws of physics. To me, the latter is a nonstarter, so we’re left with the fact that you missed something in the creation of the replica.

    ReplyDelete
    Replies
    1. Of course there are zombies (unfeeling things): Rocks and molecules are zombies. Toasters are zombies. Today's T1 robots are zombies. Microbes and plants and organisms without nervous systems are probably zombies. Dead and brain-dead organisms are zombies.

      Isaure (T3) might be a zombie, but ("Stevan says") either she isn't a zombie or we can never know better.

      T5 zombies are a stupid idea (yet it's what Chalmers means). "Stevan says" T5 zombies are probably impossible -- but explaining why they are impossible would require solving the hard problem (of explaining how and why organisms feel).

      Delete
    2. Oscar, I agree with your comment. My intuition tells me that if you had a perfect (biology-wise) copy of a human, then how could it possibly lack consciousness, if every single molecule is the same? To build on your comment, the reading also says “Notice that Chalmers allows that zombies have internal states with contents, which the zombie can report (sincerely, one presumes, believing them to be the truth); these internal states have contents, but not conscious contents, only pseudo-conscious contents.” If you are biologically identical, can have internal states and report them verbally, then how are you any different from any of us? In other words, on Chalmers’ definition of a zombie, how could you assume that this zombie lacks consciousness? It seems to me like it doesn’t make any sense to use this definition for further thought experiments.

      Delete
  12. It has been classically discussed that subjective phenomena cannot be studied in an objective framework. By this statement, consciousness cannot be objectively studied as it can only be accessed by the possessor. This also relates to the problem of other minds, in that we cannot assume the mentality of someone other than ourselves. Dennett explains his idea of heterophenomenology as a neutral path connecting the objective and subjective. However, Dennett’s idea of heterophenomenology is not neutral because we cannot prove or disprove subjective accounts. Scientific (third person) data will never perfectly align with subjective (first person) data. Framed in discussions of the hard problem of consciousness, heterophenomenology cannot tell us how and why we feel something because science does not feel, it makes an educated guess based on hard data. Verbal reports cannot be considered to be hard, unbiased data because verbal reports will always be biased to the individual that reports them.

    ReplyDelete
    Replies
    1. "Heterophenomenology" (which is just a fancy way of saying T3) does not solve the hard problem, or prove there is no feeling. It's just weather-reporting (predicting what people feel from the brain and behavioral correlates, including, of course, the verbal ones).

      The hard problem is explaining how and why feeling organisms feel. The other-minds problem is that you can never know for sure. But with behavioral (including verbal) and neural correlates, you can do a fairly good job predicting whether, what and when organisms feel. That's what the summer school on The Other-Minds Problem in Other Species will be about.

      Delete
  13. Early on in the article, Dennett quotes Levine, who explains that the verbal explanations/descriptions of conscious experiences are not sufficient for answering the hard problem. I am inclined to agree with Levine (and disagree with Dennett) on this, because when a thought is verbalized, a small bit of that thought is already "lost in translation" due to the constraints on human communication; we do not always think or feel in sentences, or even phrases, so the verbalized correlates to these expressions of consciousness are not going to paint a fully accurate picture of consciousness.

    Additionally, Dennett's description of heterophenomenology as "good old third person scientific method applied to the particular phenomena of human (and animal) consciousness" seems to be a sneaky way of equating third- and first-person data, which is not possible. In science, the former is widely regarded as objective, whereas the latter is thought to be subjective. You cannot ascribe objectivity to subjective accounts, unless these subjective accounts are absolutely identical across individuals. Heterophenomenology is simply third-person data acquisition, plain and simple, and thus only addresses the "what/where" of consciousness, not the "how/why."

    ReplyDelete
    Replies
    1. The task of cognitive science is to explain how and why organisms can do what they can (the easy problem: explaining knowhow) as well as how and why organisms can feel (the hard problem).

      The other-minds problem is to determine whether and what organisms feel. (With humans, "heterophenomenology" helps a lot with mind-reading -- thanks to language, not thanks to the "science" of "heterophenomenology.")

      "Consciousness" is a weasel-word. Feeling covers it much more simply and straightforwardly.

      The "contents of consciousness" are what we feel and know (it feels like something to know something).

      Delete
  14. The zombie fervently believes he himself is not a zombie. Chalmers believes he gets his justification from his “direct evidence” of his consciousness. So does the zombie, of course.
    I think I understand where Dennett is coming from in this passage: if zombies and real people act the same way, then how are we able to tell them apart? If the zombie system is like us in all its behavior, and thinks that it is like us, then have we not come up with something similar? We've talked about how what differentiates us humans from machines is the fact that we are able to feel, but I think that the zombie reflects something similar when it says it has direct evidence of its own consciousness as well.

    ReplyDelete
    Replies
    1. The meaning of "zombie" is that it does not feel, it just acts. Hence the zombie has no "fervor," nor does it have beliefs. (It feels like something to believe X; without that, there is only data, processing, access, input, output.)

      If Isaure (T3) were a zombie, she would act exactly as she does, but she would not be feeling a thing. Therefore the reverse-engineering would be successful, but the mechanism would be wrong. (Underdetermination.) We have no way of knowing one way or the other (other-minds problem). And there's no point asking Isaure, because we know what she will say.

      T5 zombies are a stupid metaphysical speculation, and just another way of saying that we have no idea how or why organisms feel, hence that it's easy to imagine them able to do everything they can do (easy prob) without being able to feel. Easy to imagine (from ignorance) but not easy to design (T3-level, let alone T5-level!)

      And that's Turing's point: If you can't say more, one way or the other, don't.

      Delete
  15. Dennett argues that to investigate feelings we can rely solely on verbal descriptions from introspection, as well as behavioural reactions, to reconstruct the inner state of the participants. First of all, verbal accounts might not be a reliable narrative: we learned from Freud's use of introspection that it is subject to many interpretations and suggestions, not to mention that introspection does not provide any causal explanation for feelings.
    Dennett then maintains in this paper that we should closely monitor not only verbal descriptions but also physiological reactions. However, how can we create a causal link between the reactions and feelings, since the reactions are the effects of our feelings rather than their causes? Also, many different inner states can lead to the same kind of physiological response: a raised heart rate might be due to pleasure, fear, or anxiety.
    Also, because of the other-minds problem, individuals might categorize their feelings differently, which leads to different ratings even when we try to use an "on a scale of 1 to 10" measure. It simply cannot be like natural science, since the things we are trying to measure here, the feelings, are neither deducible nor measurable, either by us or by a third person.

    ReplyDelete
    Replies
    1. I agree with what you're saying. The weakness of heterophenomenology is its stated reliance on verbal language. By taking verbal output to be a true reflection of internal and mental states, one leaves out the nuance of cognition and consciousness. As shown in numerous studies, a lot of the processes behind cognition come before consciousness, and one has to be conscious of something to be able to communicate it through language. Dennett tries to solve this problem (or at least to corroborate the verbal output) by analyzing the physical context around the speech. However, I argue that this is not a solution, because language and physical context are claimed to be indicative of what the feeling is on the inside, and to be all that matter. If that were true, we should be able to recreate everything in the situation and thereby reproduce the same internal and mental state; but this explanation is basically computationalism, and has already been refuted.

      Delete
  16. “But all other such data, all behavioral reactions, visceral reactions, hormonal reactions, and other changes in physically detectable state are included within heterophenomenology. I thought that went without saying, but apparently these additional data are often conveniently overlooked by critics of heterophenomenology.”
    Heterophenomenology seems to be an extension of the Turing Test. It tests the reactions of an organism: what they say, and what they do both physically and chemically in the brain, hopefully capturing the dimensions of feeling. In this way, this method of scientific testing attempts to come to an objective account of how and why organisms feel, but only describes correlates of this feeling process. In the end this test measures what we do, not how we feel or why. I am a firm believer that we cannot solve the hard problem; as Turing predicted, there is no way to objectively measure the phenomenon of feeling.

    ReplyDelete
  17. I agree with you that introspection gives us nothing when it comes to the hard problem. We are no wiser about the internal mechanisms that govern qualia after writing down how we feel about a certain topic. However, we do know that it has to be some function of the neural network within our brains that allows this to occur. This is mainly because of lesion studies, in which people lose qualia in a given aspect of life because they no longer have the connectivity that they once did.

    ReplyDelete
    Replies
    1. I agree with your point that introspection doesn’t help us much in figuring out our own way of feeling or cognition. We will never be able to take a step back and objectively think about what’s going on in our heads in terms of cognition. At the same time, we’re also not able to take an outsider’s point of view and just attach certain bodily signals to what’s going on in our minds, since we are not able to verbalize exactly what is happening. This, to me, is why, although heterophenomenology seems like a promising idea, it can’t actually be applied in reality.

      Delete
  18. "As I like to put it, we are robots made of robots–we’re each composed of some few trillion robotic cells, each one as mindless as the molecules they’re composed of, but working together in a gigantic team that creates all the action that occurs in a conscious agent."

    Robots made of robots - this is an interesting choice of words by Dennett. It is true that we are made up of many cells, each of which is a "robot" because it is unconscious, unfeeling, and basically an agent of computation. He does seem to be aware that the sum of all these robotic parts amounts to consciousness in human beings. However, it's a double-edged sword: saying that WE are also robots implies that our ability to cognize is replicable in a robotic computational machine, which we know is not the case (at least not yet). As we've discussed in this class, computation is part of the story, but there is also something else going on. That something else is the hard problem, which is exactly what Dennett is denying the existence of in this paper.

    ReplyDelete
    Replies
    1. I agree with you in the sense that it is not accurate to compare humans to robots, because the comparison does not take feeling into account, even if there is computation and we do succeed with the symbol grounding problem. From this article it seems as though Dennett's approach towards the hard problem (though really more towards the easy problem) is through heterophenomenology, which, from what I understood, is basically examining someone else's thoughts from a third-person point of view, which is unreliable. We all interpret things differently depending on circumstance and environment, and trying to describe those things from a subjective point of view is a bit far-fetched. That being said, it doesn't really explain the hard problem with regards to feeling.

      Delete
  19. Heterophenomenology attempts to objectify the subjective mentality of another individual. Heterophenomenology combines the individual’s beliefs with their physical state and context to “bracket it for neutrality”. The first- and third-person (or rather ‘felt’ and ‘unfelt’) data may not always align, however, and thus when the subject’s internal verbal report does not match the external experience and context, the subject’s report should always win (and therefore, it is not always neutral). One cannot deny or disprove that a subject still has such feelings, even if these feelings are ‘wrong’ in the objective and observable context.

    Furthermore, even if we are able to harness the power of heterophenomenology to better quantify and objectify subjective conscious experiences, and we do find correspondences between third-person data and first-person cognitive states, as Fodor previously highlighted – causation cannot be inferred from correlation. Therefore, even in providing us with correlations between ‘feelable’ and ‘unfeelable’ data, heterophenomenology would still fail to provide us with an explanation of the why and how we feel anything at all.

    ReplyDelete
  20. I think Dennett tries to address an important problem about solving the easy or the hard problem: ultimately it is a group of people, each doing and feeling in the 1st person, who are trying to generalize this to an objective 3rd person. Not only does this completely ignore the other-minds problem, it assumes that 1st-person experiences can be summed up or generalized into 3rd-person objectivity. My understanding is that even if we discard the other-minds problem and declare that when two people are in the same external state they are also in the same mental state (say they are looking at the color red), it seems an oversimplification to say that this experience is absolutely the same in everybody, taking Dennett’s idea to the extreme. However, if we only take in the external states, that only addresses the easy problem, even though it is possible in this case to generalize across two people’s external states.

    ReplyDelete
  21. From what I understand, heterophenomenology is claimed to be an objective, scientific, and empirical way to study consciousness. Through verbal statements, and measurement of the physiological changes that accompany these statements, proponents of heterophenomenology create a "phenomenological profile" of consciousness. I am with Chalmers in that I do not understand how we can study consciousness through subjective verbal statements: introspection, though it must be considered because our subjective experience is the only thing we can be sure of, does not provide much insight into an answer to the hard problem (why and how do organisms feel?). Nor do the behavioural measurements that heterophenomenology takes into consideration. They simply provide correlational data for the questions we need answers to, but do not give clues to any causal mechanisms, just as mirror neurons and neuroimaging techniques only provide correlational data for answering the easy problem.

    Introspection does not help with answering the other-minds problem either (how can we tell whether and what other things are feeling?) because of the Zombie Hunch - there is a zombie, identical to humans in every way, that believes he undergoes conscious experience but in fact does not. For me, this Zombie Hunch finds the flaw in the heterophenomenology method, because the Zombie would answer and react to questions as if he believed he did have conscious experience, which would be indistinguishable from the answers of humans. If a Zombie could be a legitimate subject of heterophenomenology while not having any conscious experience at all, then clearly this method is not fulfilling its goal of studying consciousness in a completely objective and sound manner.

    ReplyDelete
  22. I get where Dennett is going with first-person science being biased, since you would only be getting different accounts of what would be considered subjective data; he also makes the point that your view begins to shade into heterophenomenology as soon as you start using 3rd-person methods. All this being said, it still does not help solve why we feel, and instead draws the reader’s attention to other issues that do not help solve the question of why and how we feel. Although he elaborates on doing and beliefs, the feeling part is carefully left to the side, or mistaken for something else, which does nothing to help solve or challenge the hard problem.

    ReplyDelete
  23. What confuses me about Dennett's argument is that he highlights the importance of subjective experience and says that it is necessary for understanding consciousness. However, he then goes on to discuss the fact that there are some experiences, certainly occurring in the brain, that we do not report on, such as subconscious experiences, peripheral vision, etc. Here he seemingly begins to dethrone the subjective experience that he previously elevated, so that he can push for systematizing these subjective experiences and thereby making them more objective.

    I feel that although there are brain functions that we do not report on, these experiences still inform the experiences that we do report on, and therefore do not discredit the significance of our subjective experiences but simply add more dimension to them.

    ReplyDelete
  24. Dennett makes some important points in his scathing criticism of Chalmers. He focuses on the importance of heterophenomenology and how it is in all ways superior to 1st-person phenomenological research. In fact, he makes a strong case that most cases used in 1st-person research are really just 3rd-person heterophenomenology. However, I feel as though Chalmers is inherently right about 1st-person experience and feeling. Our subjective 1st-person experiences can never be adequately captured through any amount of description in our own words, or measurement through scientific methods (e.g., fMRI). Descriptions of our experiences are reductions, and as Chalmers points out, these descriptions are heavily prone to error (they are memories, after all). However, I cannot really even conceive of what a 1st-person science would be. Ultimately science, the science of consciousness included, involves the 1st-person evaluation of subjective experiences or completion of a task, which is then evaluated by another entity, a 3rd person. Perhaps this is Dennett’s point? Therefore, I agree that the best way to conduct research on consciousness (computationalism aside) is heterophenomenology. However, I am not necessarily convinced of the usefulness of this type of research, as, truthfully, subjective experience in all its complexity can never hope to be adequately captured.

    ReplyDelete
    Replies
    1. I completely agree with your statement that by nature of describing a 1st person experience (i.e. your feelings) it becomes 3rd person. Is describing feelings that helpful to research on consciousness though? How could heterophenomenology move beyond being descriptive and provide any insight into causal explanations? I don’t have an alternative solution to offer of course, but stating all the functional facts about feelings doesn’t explain them.

      Delete
    2. Just like a simulated waterfall will never be wet, a described feeling will never be felt. (Not that we're trying to reproduce feelings, but rather explain them)

      Delete
  25. Although heterophenomenology is a good scientific method for determining WHAT is happening during human consciousness, it is at best an attempt at the hard problem: "Heterophenomenology is nothing but good old 3rd-person scientific method applied to the particular phenomena of human (and animal) consciousness." However, I believe Dennett misses the point when he states, "What has to be explained by theory is not the conscious experience, but your belief in it (or your sincere verbal judgment, etc)." A theory that truly aimed to answer the hard problem would not be answering what is happening (the physiological data, the brain activity, etc.) when we feel; likewise, your level of belief in your conscious experience gets us no closer to answering how and why an entity feels.

    "I don’t stipulate at the beginning of the day that our subjective beliefs about our first-person experiences are “phenomenological” beliefs in a sense that requires them somehow to depend on (but not causally depend on) experiences that zombies don’t have!"

    The above quote is precisely what is wrong with heterophenomenology. Just asking a person how they feel about an experience would be the equivalent of dealing with a T3 robot who does what you do and is verbally indistinguishable; because of the other-minds problem we can't ever be quite sure whether their reports of conscious experience are based on real consciousness, or whether this is basically the type of method that Searle's CRA showed to be a failed path towards answering the hard problem.

    ReplyDelete
  26. When reading this paper, I found the defense of heterophenomenology actually covered most of the criticisms that are lobbed at it. I do not think that heterophenomenology is a wasted effort, but I did not find that the paper touched at all on what it is to understand feeling. While recording of first person experience and introspection may provide useful correlational data and create an interesting documentation of the human experience, it really doesn't at all touch on the hard problem.

    I caught myself wondering how these studies would aid in the passing of the Turing Test. Even if we found some reliable correlation between environmental stimuli and first-person experience, how would we make a robot have these experiences? This puts us no closer to understanding the hard problem, and in terms of creating artificial intelligence it would at best allow programming in which correlational responses are coded for.

    This account also does not illuminate anything new regarding the other minds problem, and the first person data of heterophenomenology must be chalked up to resulting from brain states or some other explanation that fails to explain how and why.

    ReplyDelete
    Replies
    1. "I have argued, to the contrary, that subjects’
      beliefs about their subjective experiences are the central data."

      Delete
    2. I think you hit on the main issue with heterophenomenology: it does not cover the "hard problem" of cognition at all. It only serves to tell us that feelings are a sort of brain state and that there is some correlation between the environment and our feelings. It tells us that feelings are essentially our actions and behaviors, and it tries to create a causal account of feeling but only accomplishes a correlation. It provides no information as to how or why we feel as we do, or are designed to feel as we do, and only serves to try to tell us "what" feeling is, as opposed to the far more important questions that could truly help us understand feeling as a whole.

      Delete
  27. “Turing showed us how we could trade in the first-person perspective of Descartes and Kant for the third-person perspective of the natural sciences and answer all the questions–without philosophically significant residue.”

    Dennett's article discusses the various positions of the A team (i.e., those in support of Turing's approach) versus the B team (i.e., those who believe Turing is leaving out consciousness and does not address the hard problem). After reading through Dennett's points, I would have to agree with the B team. The main point that turned me off from the A team's claim is that their focus is more on beliefs than on experiences themselves. I agree with the B team that experience is an essential thing to account for, especially relating to consciousness. The A team doesn't address the hard problem (i.e., figuring out how and why we feel, or how to measure consciousness) and instead assumes that all the other measurements that are part of heterophenomenology, including speech, internal processes, etc., are enough of an explanation.

    ReplyDelete
  28. “…he will be awake, able to report the contents of his internal states, able to focus attention in various places and so on.  It is just that none of this functioning will be accompanied by any real conscious experience.  There will be no phenomenal feel. There is nothing it is like to be a Zombie.”

    This is taken from Chalmers’ zombie argument, which holds that a zombie is something that could have internal states and could verbally report them. Therefore, we can’t prove that they do or don’t have consciousness, because if we asked them they would tell us that they did, and we can’t look inside their minds to check. This makes sense to me, and I can see how it applies to T3, as any T3 could be a zombie, but we could never really know. Initially, I thought that Chalmers was wrong, because it didn’t make any sense for something that is T5 to be a zombie (probably just because it’s hard for me to imagine something exactly like me that doesn’t truly feel). However, how would we categorize psychopaths? Aren’t psychopaths technically zombies, as they don’t have emotion in the "ways" others do? I get that if you kicked them it would “actually” hurt, but can’t it be argued that it would actually hurt a T3 robot as well, because they would tell you it did?

    ReplyDelete
  29. "From the recorded verbal utterances, we get transcripts (e.g., in English or French, or whatever), from which in turn we devise interpretations of the subjects’ speech acts, which we thus get to treat as (apparent) expressions of their beliefs, on all topics. Thus using the intentional stance (Dennett, 1971, 1987), we construct therefrom the subject’s heterophenomenological world."

    I have a few issues with heterophenomenology. One of them concerns what was said above about "devising interpretations" of what is said during the heterophenomenological analyses. The standards governing this interpretation aren't exactly clear to me, and this aspect of heterophenomenology seems rather similar to classic introspective analysis. Furthermore, a more fundamental issue is that I am not sure the method is a true method for evaluating consciousness. Rather, as has often been said in class, it predicts the conditions under which certain expressions of consciousness come out, but doesn't explain how or why they do.

    ReplyDelete
  30. In his text, Dennett demonstrates a clear misunderstanding of Chalmers’ definition of “zombie”. Dennett seems to think that zombies have consciousness or, simply put, that they feel things. Indeed, Dennett states, “The zombie has the conviction that he has direct evidence of his own consciousness, and that this direct evidence is his justification for his belief that he is conscious”. However, as Dennett quotes, Chalmers clearly defines a zombie as an entity that does not have consciousness, a being that “lacks conscious experience entirely”. Furthermore, Chalmers states that zombies have “[…] no phenomenal feel. There is nothing it is like to be a Zombie.” In other words, there is nothing it feels like to be a zombie. Zombies are exactly like us humans in the things they are able to do, except that they lack feeling.

    Due to Dennett’s misunderstanding of what a zombie is, his arguments regarding zombies do not challenge Chalmers’. Chalmers’ “Zombic Hunch” is simply the question of how and why we are not zombies, or, in other words, how and why we feel things (the hard problem), questions that Dennett does not directly address.

    ReplyDelete
  31. I do not understand how heterophenomenology can be a scientific method for the study of consciousness, due to inherent contextual and methodological inconsistencies. It is impossible to place yourself outside of your own consciousness in order to examine either your own consciousness or someone else’s. Consciousness is a filter that skews the primary data that heterophenomenology claims so strongly to prioritize. No matter how hard you attempt to separate consciousness from primary data, the very fact that the person engaging in a study has beliefs and ideas means that consciousness cannot be separated from a study of some consciousness. The attempt to isolate primary data is a redundant, ugly, circular trap.

    Even more troubling is that consciousness seems to be necessary to study consciousness. The task of studying consciousness cannot be outsourced to something (perhaps a machine) that does not experience consciousness itself. A data-collection machine does not have the tools to understand/interpret consciousness, or even to collect data about consciousness, without having some concept of consciousness. In order for a machine to be a viable option for collecting data about consciousness, it would need the tools to understand consciousness, which means that consciousness would need to be codified and reduced to elements. We would need to already understand consciousness to create a machine that understands consciousness so that it can help us understand consciousness. The concept is impossible, and an attempt at its execution would be futile.

    ReplyDelete
    Replies
    1. "Conscious" only counts as conscious if it is felt. 1st-person data is all the experience that is felt, and I think it is a very valuable attempt to recenter the focus on the subjective felt experience that is the hard problem: cohabiting with the felt state inside the head is the feeler.


      I agree that when asked to describe phenomenal/conscious/felt states, one inevitably ends up going into the 3rd-person perspective, and the homuncular problem of representation arises again: if we have these internal representations, who is the one doing the representing? In trying to describe one's own experience, we come to 're-present' that experience.
      Introspection doesn't help us understand how or why we feel. How can we move past descriptions into finding causal mechanisms, especially if, when we take out feeling, we could still do all that we can do? A description is a simulation, not the thing itself.

      Delete
  32. My intuition is to disagree with the conclusions of this paper, though it has certainly affected my conception of a mind. It is cool to imagine the possible futures of cognitive augmentation as science and technology step closer to connecting the mind with computing devices. An insertable brain module as described in the article sounds like the tool/gadget of the future, but language and tools like notebooks have been facilitating brain function for millennia in a similar way: such tools will probably only affect a brain/mind's doing capacity.

    On the notebook example: Otto feels that he believes he knows the Museum’s address because he can depend on the notebook the way Inga can depend on her memory. But the notebook does not cause the feeling of belief (to rephrase, the notebook does not have a causal effect on Otto’s mind). Likewise, it is not the case that without a notebook Otto would be unable to have such a feeling. Just as for normal humans whose memory systems are intact, the brain is a necessary yet insufficient (not causal) condition for a mind. The conjunctive use of Otto’s brain and notebook is necessary but insufficient for his mind and feelings.

    Showing that Otto and Inga both feel (believe) with or without “externalization” of memory does not imply that any externalized module is part of a mind. If anything, it shows that the mind must exist in the intersection of Otto and Inga’s cases (rather than the union or difference) meaning that the causal mechanism for feeling must be something both Otto and Inga have, and likewise cannot be something exclusive to either case.

    A reasonable conclusion from this parable is that memory is needed for a mind, regardless of its implementation; without memory, we couldn’t learn categories or features, leaving us effectively no different from the infinitely memorious Funes. Recall that the key to terrestrial life’s ability to learn categories is selective forgetting! Maybe an inserted memory module in the brain with huge data storage could break the human mind by taking away our ability to selectively ignore and forget.

    ReplyDelete
  33. This comment has been removed by the author.

    ReplyDelete
  34. "We are robots made of robots"

    I loved this take on the problem of consciousness. The idea that we are composed of a few trillion robotic cells, none of which has volition, or consciousness, or free will, but that those cells in combination somehow work together to build human beings, who think and feel and act, is a thrilling thought for philosophy and AI research. This is an excellent way to frame the beginnings of a response to Turing's question: how can we make a robot that has thoughts, that learns from "experience" (interaction with the world), and uses what it learns the way we can? The only problem is that this response fails to account for the big black box of consciousness. Even if we built such a robot, with trillions of robotic cells, able to do all the human-like things mentioned above, we would still fall short of explaining the "how and why" of what it would do.

    "The objection lodged in my paper to heterophenomenology is that what cognitive scientists actually do in this territory is not to practice agnosticism. Instead, they rely substantially on subjects' introspective beliefs (or reports). So my claim is that the heterophenomenological method is not an accurate description of what cognitive scientists (of consciousness) standardly do. Of course, you can say, (and perhaps intended to say, but if so it wasn't entirely clear) that this is what scientists should do, not what they do do."

    Although Dennett makes a convincing argument for the A-team, I can't help but be swayed by Alvin Goldman's issues with heterophenomenology. I can't help but be reminded of introspection when reading the procedure that takes place. Having just discussed the weaknesses surrounding human consciousness (i.e., the false beliefs humans have about their visual perception), it seems a faulty procedure to rely so heavily on verbal self-reports to examine consciousness.

    Dennett disagrees on the grounds that heterophenomenology should and does fall within the canon of what cognitive scientists do do, which I don't think is the best way to dissuade the audience from Goldman's position. Though Dennett does provide cases where subjective reports were called for, not all of the cases hold up against the modern literature.

    ~

    Dennett does not go into too much detail in the article, but I think an important distinction worth pursuing is that of subjective experience vs. objective stimulus. There seems to be a huge divide here, leading to two schools of thought on how cognitive science research should be done. I suppose it's about how much you value the objective stimulus coming in, versus whether you are interested exclusively in what the human mind is able to register and manipulate from that experience.

    "-molecule for molecule identical to me, and identical in all the low-level properties postulated by a completed physics, but he lacks conscious experience entirely… he is embedded in an identical environment."

    Can a zombie exist in such a way? Doesn't its molecule-for-molecule identical composition implicitly bring consciousness with it? If it doesn't, I would challenge Chalmers to identify the missing element/system/structure in the zombie that is responsible for its unconscious state.

    ReplyDelete

Opening Overview Video of Categorization, Communication and Consciousness

Opening Overview Video of: This should get you to this year's introductory video (which seems to be just audio):  https://mycourses2...