The Mereological Fallacy in Neuroscience

 
 
Aedes
Reply Thu 26 Feb, 2009 11:12 pm
@nerdfiles,
nerdfiles;50919 wrote:
I highly doubt the point is a fruitful one that a claim does not look, stylistically, like "science" and that this is supposed to disqualify it from being advanced as a scientific claim.
What I'm saying is that because it doesn't look like formal scientific writing, it makes me question whether it actually was. Where is your quotation from -- do you have a citation for it that suggests whom he was speaking to and if it was meant to be the advancement of a scientific claim?
nerdfiles
Reply Thu 26 Feb, 2009 11:19 pm
@Aedes,
Aedes;50920 wrote:
What I'm saying is that because it doesn't look like formal scientific writing, it makes me question whether it actually was. Where is your quotation from -- do you have a citation for it that suggests whom he was speaking to and if it was meant to be the advancement of a scientific claim?


Amazon.com: Astonishing Hypothesis: The Scientific Search for the Soul: Francis Crick: Books

I suspect, however, that you are going to take the argumentative route that any "book" not solely concerned, by its author, with being published in a "scientific journal" cannot possibly be treated as legitimate science. I really hope you do not take this route, for then you might at the same time have to accept that eliminative materialism, physicalism, and various forms of reductivism simply are not science.

I would readily admit this. When ontological reductionism comes into the picture, the scientist has taken off her lab coat and, more or less, turned to armchair philosophizing. However, that does not change the fact that these thinkers take themselves to be advancing a scientific theory. They are, in effect, trying to push the boundaries of science, to give it more explanatory power. Of course, there comes a point when (in cases like the mereological fallacy) they explain nothing at all.
Aedes
Reply Thu 26 Feb, 2009 11:29 pm
@nerdfiles,
nerdfiles;50921 wrote:
I suspect, however, that you are going to take the argumentative route that any "book" not solely concerned, by its author, with being published in a "scientific journal" cannot possibly be treated as legitimate science.
Awfully touchy. :sarcastic:

The legitimate communication of original scientific research can be in a book, a journal, a presentation, whatever. But what they have in common is a chain of thought from a hypothesis through a method through results through interpretation.

If this phrase you're selectively quoting here is in a book written for a lay audience, but his original research it's based on does not use anything resembling this language, then it would be fair to conclude that he was choosing language for accessibility and not precision.

I don't know -- but I'm going to keep an open mind. You're expounding upon a quote whose context you may not have fully considered. Furthermore, I wonder if you're generalizing about the extent of this mereological fallacy as a chronic blight within neuroscience communications but without evidence that it's very widespread.

Again, I don't know. But it's worth asking the questions.
nerdfiles
Reply Fri 27 Feb, 2009 12:03 am
@Aedes,
Aedes;50923 wrote:
Awfully touchy. :sarcastic:

The legitimate communication of original scientific research can be in a book, a journal, a presentation, whatever. But what they have in common is a chain of thought from a hypothesis through a method through results through interpretation.

If this phrase you're selectively quoting here is in a book written for a lay audience, but his original research it's based on does not use anything resembling this language, then it would be fair to conclude that he was choosing language for accessibility and not precision.

I don't know -- but I'm going to keep an open mind. You're expounding upon a quote whose context you may not have fully considered. Furthermore, I wonder if you're generalizing about the extent of this mereological fallacy as a chronic blight within neuroscience communications but without evidence that it's very widespread.

Again, I don't know. But it's worth asking the questions.


I wouldn't take myself to be touchy. I just get that line of argument a lot. If anything, I should be seen as at least siding with the scientist in his or her endeavor, even if I reject its conclusion in the end. It is fruitful, perhaps, for scientists to extend the reach of science beyond its scope. My comments were an attempt to preemptively attack the idea that scientists who write outside of journals or outside of the "style" of science are automatically not to be considered as attempting the scientific. Indeed, it is the content of their work that we should judge, not whether or not it is written in a certain form or appears in a certain journal. Why? Because they wish to step outside of their "form," outside of their conservative notion of validity. Nevertheless, I still disagree with Crick in his claim.

I did cite Blakemore as well. I could cite others, if necessary. But that really distracts us from the argument at hand--whether or not they're correct. Whether or not it's widespread is not my concern.

But so the claim you make is that he's appealing to an audience with imprecise language.

"Neurons have knowledge" and "What you see is not what is really there; it is what your brain believes is there.... Your brain makes the best interpretation it can according to its previous experience..."

I cannot truly see these as "rough claims" that could be better said (or more "scientifically" said). Are we to delve into a radical skepticism about language? Or perhaps a dogmatic formalism?

These writers are using our language to express their claims. He's set up the context. "What you see is not what is really there." I fully admit that sometimes statements can be taken out of context, definitions may not be forthcoming. But you really have to call a spade a spade on this one.

His claim is rather straightforward. "Your brain believes" and "interprets." For one, what else could he mean? Surely he doesn't have a more "precise" term that only by degree expresses the same concept. If he did, why not simply use that one? Are you to chalk this kind of language up to mere condescension? Or pretense? When I see the verb "to interpret," I don't think to myself, "ah, this is just shorthand for something far beyond what I could understand." In some way, "interpret" has to conform to our conventional meaning. Furthermore, a "belief" is not a state. So it would make no sense to identify "believing" with a brain state. You cannot identify when a person is believing. And you certainly cannot identify when a brain is believing. Believing just isn't the kind of thing that refers to a state.

We really don't need to be philosophically charitable here. The claim is rather plain, and it stands on its own without us going further into his definitions.

[INDENT]"You," your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules.[/INDENT]

Really, his claim is rather barefaced, and I doubt we need to have a long-winded discussion consisting of "Is that what he really means?"

Yes, indeed it is what he means.
Mr Fight the Power
Reply Fri 27 Feb, 2009 08:14 am
@nerdfiles,
nerdfiles wrote:
It is essential that we have a brain in order to be angry, thoughtful, respectful, and to have paid attention to things in the past as well as to pay attention in the present and future. But no (empirical) evidence will make the proposition "Indeed, my brain felt hurt by your rudeness yesterday" have sense.

Drilling "deeper" down into the complexities of the brain will not make that statement have sense. The experience of thought is not just more neurons being at work. Thought is of a category quite other than physical categories. Whether or not it is "mental" is another question. "The mental" as an ontological category is quite troublesome to deal with, but the psychological sciences are perhaps the best avenue to take. The neuro-psychological sciences may work with neuroscientists to correlate psychological predicates with neural states, but it still remains that no psychological predicate can be logically, or conceptually, identified with a neural state. And it is especially conceptually flawed to reduce one to the other (strong identification/reductionist/eliminativist).


What evidence lends truth to the proposition "I felt your rudeness yesterday"?

How do we connect psychological predicates to people?
nerdfiles
Reply Fri 27 Feb, 2009 08:49 am
@Mr Fight the Power,
Mr. Fight the Power;50988 wrote:
What evidence lends truth to the proposition "I felt your rudeness yesterday"?

How do we connect psychological predicates to people?


By criterial evidence, namely through sensation-behavior and verbal-behavior. Evidence of this sort is not inductive, but logical evidence for the ascription of a psychological predicate. Further, since behavior is criterial evidence for the ascription of a predicate, what I argue is not behaviorism. There is no logical identification, or reduction, of the predicate with the behavior on the basis of which it was applied. Identification or reduction would involve a definition of "predicate as behavior-X, behavior-Y, behavior-Z." Criteria only play a part in the meaning of the predicate. Psychological predication is not equivalent in meaning to the behavioral description.

If you indeed felt my rudeness yesterday, you might simply recall that I made such-and-such rude utterance or gesture. The predication of such behavior is made possible because it is public and criterial. Also, the meaning of the predicate in question is partly logically bound to the criteria. Thus, had I responded to you in some other way, you might have used a different, logically relevant predicate.
Aedes
Reply Fri 27 Feb, 2009 09:43 am
@nerdfiles,
nerdfiles;50927 wrote:
But so the claim you make is that he's appealing to an audience with imprecise language.

"Neurons have knowledge" and "What you see is not what is really there; it is what your brain believes is there.... Your brain makes the best interpretation it can according to its previous experience..."

I cannot truly see these as "rough claims" that could be better said (or more "scientifically" said).
And I think it's fairly obvious that he's NOT making a formal scientific assertion about the brain, but rather simplifying a complex topic by anthropomorphizing it.

nerdfiles;50927 wrote:
you really have to call a spade a spade on this one... His claim is rather straightforward. "Your brain believes" and "interprets." For one, what else could he mean? Surely he doesn't have a more "precise" term that only by degree expresses the same concept.
I think you are most certainly missing the context. This is a simplification of what probably amounts to a whole body of research. Anthropomorphizing or personalizing the brain may be a lot more digestible to this chosen audience than presenting the entirety of the demonstrable science behind his subject. Furthermore, what if in the following pages or chapters he overtly clarifies what it means when he says "the brain believes"?

nerdfiles;50927 wrote:
Are you to chalk this kind of language up to mere condescension? Or pretense?
Try and teach Wittgenstein to a group of 7th graders. It's neither condescension nor pretense to rephrase something for a lay audience. I tend to use analogies as my tactic when I present complex medical issues to patients -- I explain homeostasis mechanisms in terms of a bathtub, for instance (balancing the amount coming through the faucet with the amount leaving in the drain). Is that dumbing it down, when I should really be explaining it in terms of hematopoiesis or natriuresis or whatever? Maybe, but I'm not going to get anywhere with jargon.

Feel free to look up my most recent publication (Lantos et al, Parasitology (2009), 136, 1-9), I'm the first author. How readable is this to someone who is not a biologist? I don't know how much teaching experience you have, but peer-to-peer communications are NOT going to work for a lay audience or even an audience of students. And a lot of popularizations we have (like Stephen Hawking's books, or Stephen Jay Gould's books) are specifically written (and therefore successful!) because of their accessibility. If one ever needs more detail about the science, the original references are always there.
Mr Fight the Power
Reply Fri 27 Feb, 2009 10:01 am
@nerdfiles,
nerdfiles wrote:
By criterial evidence, namely through sensation-behavior and verbal-behavior. Evidence of this sort is not inductive, but logical evidence for the ascription of a psychological predicate. Further, since behavior is criterial evidence for the ascription of a predicate, what I argue is not behaviorism. There is no logical identification, or reduction, of the predicate with the behavior on the basis of which it was applied. Identification or reduction would involve a definition of "predicate as behavior-X, behavior-Y, behavior-Z." Criteria only play a part in the meaning of the predicate. Psychological predication is not equivalent in meaning to the behavioral description.

If you indeed felt my rudeness yesterday, you might simply recall that I made such-and-such rude utterance or gesture. The predication of such behavior is made possible because it is public and criterial. Also, the meaning of the predicate in question is partly logically bound to the criteria. Thus, had I responded to you in some other way, you might have used a different, logically relevant predicate.



When I used that example, I was contrasting "I felt your rudeness yesterday," (which makes sense, according to you) with "My brain felt your rudeness yesterday" (which doesn't make sense, according to you).

I am trying to grasp how my utterance of the statement "I felt your rudeness yesterday" qualifies as criterial evidence that I possessed the quality of feeling.

Secondly, if this does provide criterial evidence of feeling, how does it identify just who or what is feeling? How does it address the problem at hand?

Perhaps I just don't understand your argument, but I cannot make sense of how you can extend behavior past inductive evidence, as I see no necessary link between behavior and consciousness.
Mr Fight the Power
Reply Fri 27 Feb, 2009 10:34 am
@Aedes,
Aedes wrote:
And I think it's fairly obvious that he's NOT making a formal scientific assertion about the brain, but rather simplifying a complex topic by anthropomorphizing it.


Here's how I took it:

A person is identified as a collection of lesser identifiable parts. Any action attributed to a person, as a result, can be reduced to action by the lesser identifiable parts. When we say "John runs", we can break this down into actions assigned to legs, arms, and other parts of John.

In this sense, it makes no sense to say "John's legs run". Whereas John possesses all of the necessary components to actually run, his legs do not, and even though we may associate running with the activity of the legs, they cannot run without other parts of John.

So I don't think the question concerns anthropomorphization, but rather whether "John believes" is literally the equivalent of "John's brain believes", in that all activity and qualities that allow us to apply the predicate to John are also present in the brain.
nerdfiles
Reply Fri 27 Feb, 2009 05:00 pm
@Aedes,
Quote:
And I think it's fairly obvious that he's NOT making a formal scientific assertion about the brain, but rather simplifying a complex topic by anthropomorphizing it.


I'll readily admit, and I think I have done this, that he is not making a claim, or body of claims, that should be strictly taken in, say, a scientific journal. However, what is the nature of this claim? What is he doing exactly? He might not be an eliminative materialist, but this is exactly my point. I do not at all wish to argue with you over what the conditions are for some claim or some text to be considered "scientific." I could ask you "just what does 'formal' mean?" But I will not. This is a philosophy forum, and thus my opponent is a philosophical school/thesis: eliminative materialism. Their claim is explicit that our "folk psychological predicates" do not exist. Their justification is scientific, in that they claim "beliefs" will be eliminated just as witches and phlogiston were shown not to exist.

Whether or not they take themselves to be scientists or whether or not scientists do X, Y or Z is not my concern. (A concern we have been preoccupied with.) I'll readily admit that scientists must write "pop" when they need to pay the bills or to cater to the baseline intellect of their audience. In a sense, every author must write to his or her intended audience. This much is, and has been, clear to me. Thus, I do not need it explained to me, as if it were to explain away what my claim is. My claim is that Crick is advancing a claim to push the boundaries of what counts as scientific. I explicitly and absolutely deny that he is simply writing shorthand or writing for a particular audience. He is making a claim that is meant to pass as valid science, just as the eliminative materialists do.

Quote:
I think you are most certainly missing the context. This is a simplification of what probably amounts to a whole body of research. Anthropomorphizing or personalizing the brain may be a lot more digestible to this chosen audience than presenting the entirety of the demonstrable science behind his subject. Furthermore, what if in the following pages or chapters he overtly clarifies what it means when he says "the brain believes"?


Now, perhaps, we have a fundamental disagreement about explanation. "To simplify," under my understanding, means to "explain in a simpler way."

First, he's not using a technical believes*, as opposed to the ordinary believes.

Second, "anthropomorphization" does not explain anything; it can only mislead. To anthropomorphize is not to explain. Now the question is: Is he attempting to explain or merely describe? Description in anthropomorphic terms turns into nothing but fanciful metaphor. Explanation in anthropomorphic terms explains nothing at all.

Quote:
Try and teach Wittgenstein to a group of 7th graders. It's neither condescension nor pretense to rephrase something for a lay audience. I tend to use analogies as my tactic when I present complex medical issues to patients -- I explain homeostasis mechanisms in terms of a bathtub, for instance (balancing the amount coming through the faucet with the amount leaving in the drain). Is that dumbing it down, when I should really be explaining it in terms of hematopoiesis or natriuresis or whatever? Maybe, but I'm not going to get anywhere with jargon.


I don't really understand the analogy, as what we are dealing with here is a particular claim about the nature of the brain and its capacities. "Teach Wittgenstein" is so over-general that it would be misleading to compare teaching an entire philosopher's System (or "therapeutic system," in Wittgenstein's case) to the nature of the brain. Whether or not one can, I highly doubt it really gets us any closer to the heart of our contention.

I really would like to attack this notion of mere "analogy." "Anthropomorphization" is a particular kind of analogy. So I should not be taken, by my attack on anthropomorphization, to be arguing that analogy is useless. Indeed, as anyone who has come across Wittgenstein should know, a majority of his better arguments were analogies. Thus, I struggle to see how, as a self-proclaimed Wittgensteinian, you might even consider that I wish to attack analogy as a viable source of explanation. I certainly hope you have not ventured to consider those thoughts.

But back to my main point. You have made the explicit move which suggests that "analogy (anthropomorphization)" and "analogy (physical process)" are equally innocuous explanatory devices. This I most certainly disagree with, if only to point out that they are two different classes of analogy and we should inspect both classes as distinguished from one another. This is the reason why we have (informal) fallacies identified as "false analogy": to help us distinguish the poor and irrelevant analogies from the proper, relevant, and helpful ones. Saying that a brain has a belief is not just some analogical device that is as innocuous as saying "gravity works like this."

Quote:
Feel free to look up my most recent publication (Lantos et al, Parasitology (2009), 136, 1-9), I'm the first author. How readable is this to someone who is not a biologist? I don't know how much teaching experience you have, but peer-to-peer communications are NOT going to work for a lay audience or even an audience of students. And a lot of popularizations we have (like Stephen Hawking's books, or Stephen Jay Gould's books) are specifically written (and therefore successful!) because of their accessibility. If one ever needs more detail about the science, the original references are always there.


Again, we can make a distinction: those who are actually making claims about the brain that are not to be taken as "mere analogy" and those who are. The writers you have cited perhaps do frequently use explicit analogy and wish to be taken as doing so. For one thing, it must be noted that neuroscientific research isn't just like theoretical physics or biology. The object of study, presumably, is belief. Exploration of the galaxy, the "beginning of time," black holes, dark energy, microorganisms, viruses, etc. are not like beliefs. So it should not be surprising, I think, that when a neuroscientist decides to treat beliefs, thoughts, and such as if they were just like any other physical phenomena, the debate takes a somewhat different turn.

Again, it is fully understood that analogy is employed, and it rightfully should be. But this is not analogy. Further, and this is why I make such a point in the last paragraph, I think it unfair to bring in the areas of (micro)biology and (theoretical) physics when we're talking about a particular scientific discourse that is very much different from these.

Now, to explicitly address your claim to the validity of "analogy" in explanation. It should be pointed out that I am being charitable here, though I need not be. Again, by addressing analogical reasoning, I need not be taken to be addressing anthropomorphization (for it is a particular, and thus more specific, class of analogy). We could easily, I think, drop anthropomorphization while still legitimately using analogy in explanation, given the right audience and context.

It must be admitted (as it cannot be argued otherwise) that Crick, for instance, is using the term "believe" and ascribing that predicate to the brain. This much is clear. He is using the letters "B," "E," "L," and so on. He is using this term, belief, in its traditional grammatical way (as is evidenced by the sentence). Whether or not he means something "highly technical" or "has references which support its use" is the argument I now address. Thus, since he is using the concept of belief (as is obvious by the quote), the question now is what he means by it. But stronger: What could he mean by it? I would like, as is clear by this stronger question, to resolve this matter by eliminating the possible argument that we'd need to read the whole of his text, or a great majority of it, along with all its sources and references, to get at what he means. By this stronger question, I wish to drive at this point: He, whether he knew it or not, mistakenly made a nonsensical claim.

So your defense amounts to (with charitable interpretation) roughly these:

(1) He's using a technical term. (Belief is really Belief*)
(2) He's using analogy (analogical extension of the predicate "to believe").
(3) He's speaking metaphorically.

Counters:

(1) It's clear that he is not using a technical term, for he would have chosen a different term. Nothing about what is quoted suggests that "believe" is being used in a "highly technical sense." "Belief," under its conventional sense, presupposes interpretation, past experience and information (that which is the outcome and mediating constitutive element of the act of interpretation). Should Crick or any other author use "belief" in some technical sense (belief*), then that author would have to provide definitions of interpretation*, past experience*, information* and many other concepts which are embedded in the concept of "belief." Not only would the text be cumbersome, but it certainly would not cater to a "lay audience." For why would anyone wish to sit down to read a text which tells them that seeing* is in fact what you're really doing, while believing*, feeling*, expressing*, uttering*, etc, etc, etc are all the right concepts? Essentially, you'd be sitting down to read a radical re-write of your entire vocabulary. Even if you disagree with me, I implore you to at least stop and reflect on this argument. It is, as is obvious, the one I raised previously (about concepts being chained together or deeply interconnected).

(2) In my reading, I have not come to the conclusion that mere analogy is being used. As with the technical-sense claim (1), I take these authors to be neither introducing bloated terms nor introducing analogy. They intend for these terms to be taken in their conventional meaning, but simply ascribed to the wrong category or thing (the brain). For instance, a hemisphere of the brain is "a conscious system in its own right, perceiving, thinking, remembering, reasoning, willing and emoting, all at a characteristically human level" (Sperry, "Lateral specialization in the surgically separated hemispheres").

Again, I'd like to point this out: Neuroscience (and thus neuroscientific literature) should not be uncritically and easily placed in the same safe zone as, say, (non-theoretical) physics, biology, chemistry, etc. If anything, neuroscientific and cognitive neuroscientific literature, I think, resembles the more "wacky" theoretical literature of physics, the highly contentious stuff (M-theory, string theory, multiverse theory, etc). If you disagree, that's fine. But at least I'd like you to try on the spectacles I'm wearing. See if you sit easy when you come across statements which claim that the acquisition of knowledge is a "primordial function of the brain" (Semir Zeki, "Abstraction and idealism").

Surely, the immediate tug is to think, "Ah, he must mean knowledge*, not knowledge like what I'm concerned with, like learning about the U.S. government's history." Does he really mean knowledge*, and not knowledge in its conventional sense? And if he does mean knowledge*, then why should we even care about his claims? As with any scientific jargon, if you tell me, "You don't believe* that" (which is just to say, on the hypothesis that eliminative materialism is true, that my brain is not believing that), then I might simply ignore you. In a less extreme example, as with any jargon, we fail to see its relevance in practical cases. I find nothing interesting to learn about Gandhi's beliefs* or his thoughts*.

Further, and to my point, if these are merely analogical extensions, then the boundaries (the semantic field) of the term so extended must also be justified, along with the combinatorial possibilities that the term plays in various other semantic contexts.

(3) "A map is not a territory" (Korzybski). A great slogan, I think, and it is relevant to this metaphor claim. A human's believing is not some charting of neuron states. We might find correlations (as this is what maps are intended to do; they're approximations, in principle). So if we say that a brain's belief are metaphorical, then we might have to apply the same reasoning here: the brain has some level of activity going on that simply is not charted, in principle, by its beliefs. But the whole premise of the neuroscientific endeavor is to chart the activity of the brain and make inductive generalizations. So on its face it is senseless and circular to talk about the brain's beliefs. And if he means beliefs*, then even worse.

Of course, this is very long, and I hope you do not think I simply went off on a tangent. His statement, and many other statements based upon it (in philosophical literature) and found in neuroscientific literature, have this quality of ascribing predicates to the brain. But more to your point: What if he defined his terms in a particular way? Well, I should hope (1)-(3) respond to that. Why should we care about jargon? How can he account for the interconnectedness of concepts like belief? Calling it metaphor simply admits the error and explains nothing.
nerdfiles
Reply Fri 27 Feb, 2009 05:11 pm
@Mr Fight the Power,
Quote:
I am trying to grasp how my utterance of the statement "I felt your rudeness yesterday" qualifies as criterial evidence that I possessed the quality of feeling.


Surely utterances do not provide "the quality of feeling" itself. The statement indicates to me that you felt something (because you've said it). But no one would ever claim that mere words contain the information of that quality of feeling. The verbal behavior (thus the content of your utterance) functions as criterial evidence that you're feeling in such-and-such a way (that way described or given by your words).

If you're asking about communicating qualia between two human agents, then you're going to have to tell me what you understand qualia to be. More than likely, I am going to deny your definition and reject the concept of qualia, thus given, as senseless. But I am open to a new and innovative definition, should you provide one.

Quote:
Secondly, if this does provide criterial evidence of feeling, how does it identify just who or what is feeling? How does it address the problem at hand?


I simply understand that the speaker of the sentence is the person making the utterance. I understand which object "I" refers to, namely you. And you understand this as well in making your utterance.

Quote:
Perhaps I just don't understand your argument, but I cannot make sense of how you can extend behavior past inductive evidence, as I see no necessary link between behavior and consciousness.


For one, I am not claiming a "necessary link." I thought it clear that when I say "criteria" I do not mean a "definition." "Criteria" and "definition" do not mean the same thing. So I am not quite sure what claim you take me to be making. I do not fully understand what "extend behavior past inductive evidence" is supposed to mean. My claim is that inductive evidence and criterial evidence are different. We ascribe psychological predicates to things based on criterial evidence because behavior is partly constitutive of the meaning of the predicate in question.

For example: I ascribe anger to X because certain behavioral criteria are met that satisfy what it means to be angry.
Aedes
Reply Fri 27 Feb, 2009 07:39 pm
@nerdfiles,
nerdfiles;51073 wrote:
This is a philosophy forum, and thus my opponent is a philosophical school/thesis: eliminative materialism. Their claim is explicit that our "folk psychological predicates" do not exist. Their justification is scientific, in that they claim "beliefs" will be eliminated just as witches and phlogiston were shown not to exist.
Where does Crick fit into this "They" of yours? What makes you think that he's speaking for a philosophical school as opposed to just being careless?

nerdfiles;51073 wrote:
My claim is that Crick is advancing a claim to push the boundaries of what counts as scientific.
And my claim is that based on this selected quote alone you have no way of gleaning his thoughts about what counts as scientific. Maybe he's being careless with language, and you're being careless by overinterpreting it as if it's emblematic of his beliefs or his peers'.

nerdfiles;51073 wrote:
I explicitly and absolutely deny that he is simply writing shorthand or writing for a particular audience. He is making a claim that is meant to pass as valid science, just as the eliminative materialists do.
No, he is distilling valid science so that it's accessible to laypeople, and if the only critique you can levy is that it contains the "mereological fallacy", then maybe he'll take it if it gets his point understood.

nerdfiles;51073 wrote:
Now, perhaps, we have a fundamental disagreement about explanation. "To simplify," under my understanding, means to "explain in a simpler way."
Yes... in this case in a way that personifies the brain rather than invoking a torrent of mechanistic and cellular jargon in the passive voice.

nerdfiles;51073 wrote:
"anthropomorphization" is not to explain anything, but only to mislead. To anthropomorphize is not to explain.
You're right, it's a conspiracy to mislead everyone. One of my former mentors at HSPH, Andy Spielman, was one of the world's leading authorities on mosquito-borne diseases, and he had hundreds of scientific publications. He wrote a pop book called Mosquito for a lay audience. He wanted to not only present to his audience the rote science of mosquito biology, but also infect them with a bit of the passion he had for these interesting and complex insects. Anthropomorphism was an extremely effective strategy in his book. And it doesn't take away from the way he presented novel research in research publications.

nerdfiles;51073 wrote:
Description in anthropomorphic terms turns into nothing but fanciful metaphor. Explanation in anthropomorphic terms explains nothing at all.
You can explain through description -- isn't that a reasonably pithy way to summarize science?

nerdfiles;51073 wrote:
I don't really understand the analogy. As what we are dealing with here is a particular claim about the nature of the brain and its capacities. "teach Wittgenstein" is so over-general that it would be misleading to compare teaching an entire philosopher's System
Oh come on, you understand perfectly well what I mean, do I really have to go through the effort to choose one assertion or another for you to get my point? Fine -- Wittgenstein in an argument with Bertrand Russell rejected the statement "There is no hippopotamus in this room" because it was not logically necessary. Explain that to 7th graders. Or pick whatever else -- explain phenomenology to a 7th grader, or explain Kant's critique of pure reason to a 7th grader. How hard does this have to be? I can explain aseptic meningitis to a 7th grader, but it's not using the same terminology as I'd use if I were presenting the subject at a meeting of peers. Since most lay books and newspapers are written at a ~ 6th grade reading level, it's a reasonable request on my part that you consider this.

nerdfiles;51073 wrote:
I struggle to see how, as a self-proclaimed Wittgensteinian, you might even consider that I wish to attack analogy as a viable source for explanation. I certainly hope you have not ventured to consider those thoughts.
It doesn't matter what you proclaim yourself to be. Whether you call yourself a Wittgensteinian or not, all that that really means is that you're a bright and articulate and well educated guy living a century later who admires Wittgenstein above all others; but you're no more a Wittgensteinian than I am a pre-Socratic. (Well, since I'm not a philosopher, maybe you are, but that's not the point).

I also am not contending that you're attacking analogy. I'm contending that you're taking quotes out of context, critiquing them as if they're in a different context, and then generalizing your critique to a whole field as if these quotes are representative.

nerdfiles;51073 wrote:
You have made the explicit move which suggests that "analogy(anthropomorphization)" and "analogy(physical process)" are equally innocuous explanatory devices. This I most certainly disagree with, if not simply to point out that they are two different classes of analogy and we should inspect both classes as distinguished from one another.
Just depends what you're trying to explain and to whom. If I tell someone with bad reflux that his esophagus hates it when he drinks coffee, then I'm using this mereological fallacy for its rhetorical effect and not for its explicative accuracy.

nerdfiles;51073 wrote:
Whether or not he means something "highly technical" or "has references which support its use" is the argument I now address... What could he mean by it? I would like, as is clear from this stronger question, to resolve this matter by eliminating the possible argument that we'd need to read the whole of his text, or a great majority of it, along with all its sources and references, to get at what he means.
I'm sorry, but that's just being lazy. If you want to know what he means, then do the legwork to read what he's written rather than going through acrobatics in your own mind. I detect that you have some latent disdain for cognitive science, and thus feel like you can't be bothered to do it.

nerdfiles;51073 wrote:
By this stronger question, I wish to drive at this point: He, whether he knew it or not, mistakenly made a nonsensical claim.
And if you can't be bothered to look through his other writing, then you'll have no way of knowing.

nerdfiles;51073 wrote:
So your defense amounts to (with charitable interpretation) roughly these:

(1) He's using a technical term. (Belief is really Belief*)
(2) He's using analogy (analogical extension of the predicate "to believe").
(3) He's speaking metaphorically.
No, my contention is that 1) he is being pragmatic for the purposes of accessibility to a lay audience and is thus willing to defer complete semantic precision, and 2) writings of his that are meant to present novel research findings are unlikely to contain a similar fallacy.
0 Replies
 
nerdfiles
 
  1  
Reply Fri 27 Feb, 2009 08:13 pm
@nerdfiles,
I've quoted others, mind you.

Quote:
Just depends what you're trying to explain and to whom. If I tell someone with bad reflux that his esophagus hates it when he drinks coffee, then I'm using this mereological fallacy for its rhetorical effect and not for its explicative accuracy.


If you were claiming that the esophagus really had all the psychological capacities that humans do, you'd be committing the mereological fallacy.

Committing is an important term here. By using such an explanatory device, are you thereby committing yourself to the semantic displacement? To saying, "Yes, the esophagus can hate, love, feel displeased," and so on? If you were to do this, you would be committing the fallacy.

But this is not--I repeat--not what Crick and the various other authors I've quoted are saying. (Have you noticed? I have not merely quoted Crick.) They're not just saying "the brain believes* ..." and "neurons have knowledge*." I use * because in your example, "hate" would be a semantic deviant. It would not have its traditional sense, because you would not be able to say "the esophagus has hated for X amount of time" or "the esophagus told me about its hatred for..." I point this out because it is important that you understand that you cannot in principle say these things. You cannot say them to mean anything genuine about the world. Thus, you'd be saying "the esophagus hates* ..." and you would be noncommittal about its being a legitimate object that can hate. You'd simply say, "No, I was being metaphorical." And that would be the end of it.

It seems that we are talking past each other, as I have apparently failed to make the distinction clear to you. I'm not really sure if I can, seeing as how we've come this far and I feel I have made fairly explicit attempts to make said distinction overt.

The explanatory device you have laid out in what I have quoted of you is absolutely the furthest from my target. Since you claim that it is an instance of the "mereological fallacy," while I have repeatedly claimed that it is not, I am more or less at a loss. Your example, simply put, is not an instance of the mereological fallacy.

Quote:
I'm sorry, but that's just being lazy. If you want to know what he means, then do the legwork to read what he's written rather than going through acrobatics in your own mind. I detect that you have some latent disdain for cognitive science, and thus feel like you can't be bothered to do it.


You misunderstand, and that is warranted, for I was not clear in my claim. What I am claiming is that it is not necessary to cover the mass of his text or corpus to see that he could not possibly mean what he says he means.

Simply put, here's an analogy. Suppose you are an atheist. It is not because we have the luxury of "the progress of Western civilization" that we understand God does not exist. Would you simply say "Ah yes, I'm an atheist because that's just the way it worked out in my society" supposing that you were familiar with the arguments against the existence of God? Certainly not. If you were a philosophical atheist, you might say, "Yes, I am a product of Western civilization, but that does not change why I choose to not believe in God. I choose to not believe because God could not possibly exist. I do not have to read the Scripture in order to tell you this."

What you are countering me with is essentially the "Read the Scripture" argument for the negative thesis in defense of God's existence, the one crafty theists employ. They argue, "Well, you haven't read the Scripture. So your argument doesn't work. (You're lazy, ignorant, biased, etc.)"

But this is clearly a bad argumentative move. I do not need to read the Scripture to tell you that God, under a particular conception, does not exist. I do not need to read all of Crick (how much would be sufficient?) to tell you that "The brain believes X and interprets Y" is ungrammatical in any context, and is senseless.

The only move one could make is to say that "believes" in that sentence means something utterly different than what it conventionally means.

I guess since "the pragmatic" rules all, my arguments will essentially be lost to you. But I'll just say again: I have mentioned others. Also, I am not saying he is being "imprecise"; I am saying he is making senseless claims. He is not "more or less" grasping the right explanation. His language is not imprecise, it is senseless. Thus, people will "more or less" "understand" a nonsense.

Quote:
It doesn't matter what you proclaim yourself to be. Whether you call yourself a Wittgensteinian or not, all that that really means is that you're a bright and articulate and well educated guy living a century later who admires Wittgenstein above all others; but you're no more a Wittgensteinian than I am a pre-Socratic. (Well, since I'm not a philosopher, maybe you are, but that's not the point).

I also am not contending that you're attacking analogy. I'm contending that you're taking quotes out of context, critiquing them as if they're in a different context, and then generalizing your critique to a whole field as if these quotes are representative.


It should be clear why I bothered to state that, for I took you to be suggesting that I was attacking analogy generally. I tried to make it clear what "it really means (to be a Wittgensteinian in this particular argument)," insofar as I could not be attacking analogy generally.

I was trying to relay to you the nature of my approach, not tell you something that holds no weight whatsoever. I was trying to make a point by claiming to be a Wittgensteinian. Essentially, I respect analogy and I find it useful in understanding and explaining things. Again, I took you to be suggesting that I was attacking analogical explanation generally because, and this is important, as I've said (in my previous post), you and I seem to be talking past each other. I have failed to make it clear to you what is a "mereological fallacy" case and what is a mere "analogical explanation" case.

But I am not attacking analogy generally, though you seem to be setting up "general analogies" whereas I am attacking a particular brand of analogy (those related to the mereological fallacy). It is your repeated setup of "general analogies" that leaves me quite confused and wanting to pull any card I can to get you to understand that I am not attacking analogies generally.

For instance, this one is "general": If I tell someone with bad reflux that his esophagus hates it when he drinks coffee, then I'm using this mereological fallacy for its rhetorical effect and not for its explicative accuracy.

And thus, it is not the kind of analogical extension that I attack. (Though "analogical extension" is only one way in which the mereological fallacy might be committed, the others being those I've pointed out: "technical terms" and "metaphor.")

Btw, P.M.S. Hacker (the Wittgensteinian) raises a lot of hell amongst the greats of contemporary philosophy (Daniel Dennett, John Searle, most philosophers of science/language) and the "big" neuroscientists (Damasio, Crick, Blakemore, Gregory, et al.) for exactly those arguments I'm providing here.

I expect no less than frustration, bitterness and heat, for this argument is the crux of contemporary cognitive neuroscience and neuroscientific psychology.
Aedes
 
  1  
Reply Fri 27 Feb, 2009 09:10 pm
@nerdfiles,
You write very long posts, I'll respond to what I can.

nerdfiles;51085 wrote:
I've quoted others, mind you.
I know, but my main contention has been that you need to know what someone means when you are selecting a quote, in order to judge their thought process (let alone that of their whole scientific field). And if you think these posts are long now, imagine if we explored all of your quotes.

nerdfiles;51085 wrote:
They're not just saying "the brain believes* ..." and "neurons have knowledge*. I point this out because it is important that you understand that you cannot in principle say these things.
Such principles assume a rigidity and precision that language doesn't have in practice. Unless you can convince the individuals you're quoting to express these passages in formal logic rather than stylized prose, you're forced to admit that their word choice is in part influenced by their rhetorical and aesthetic needs.

nerdfiles;51085 wrote:
It seems that we are talking past each other, as I have apparently failed to make the distinction clear to you.
I think I get it. I'm just, perhaps, reading into an agenda you have about neuroscience that is making your own interpretation a bit less fair and analytical than it should be. You could approach this semi-scientifically. Your null hypothesis would be that the mereological fallacy is NOT part of neuroscientific thinking, but apparent examples of it are used for literary purposes. And your experimental hypothesis would be that the mereological fallacy is actually inseparable from the scientific assertions of this field.

nerdfiles;51085 wrote:
What I am claiming is that it is not necessary to cover the mass of his text or corpus to see that he could not possibly mean what he says he means.
Ok -- but aren't you curious as to what he does mean and why he chooses to express himself differently in this one passage?

nerdfiles;51085 wrote:
this is clearly a bad argumentative move. I do not need to read all of Crick (how much is sufficient) to tell you that "The brain believes X and interprets Y" is ungrammatical in any context, and is senseless.
But you DO need to read him to understand what he actually means, and if this passage of his is a scientific expression or merely a poetic one.

nerdfiles;51085 wrote:
The only move one could make is to say that "believes" in that sentence means something utterly different than what it conventionally means.
You mean, as if it could be used figuratively?

nerdfiles;51085 wrote:
I guess since "the pragmatic" rules all, my arguments will essentially be lost to you.
Well, if "There is no hippopotamus in this room" is a false statement to you as it was to Ludwig, then pragmatism isn't one of your highest priorities.

nerdfiles;51085 wrote:
I am saying he is making senseless claims.
And I'm saying that you haven't demonstrated that it's actually a claim at all, as opposed to a dumbed down introduction to a topic. If you want to know what his claims are, go and look up his claims.
0 Replies
 
nerdfiles
 
  1  
Reply Fri 27 Feb, 2009 09:20 pm
@nerdfiles,
Quote:
Well, if "There is no hippopotamus in this room" is a false statement to you as it was to Ludwig, then pragmatism isn't one of your highest prioirities.


Well, I think the issue of "pragmatism" doesn't come in, because I don't think it false. I think it indeterminate, not false. The pragmatist sees no utility in that kind of "skepticism," though I'd call it logical sincerity.
0 Replies
 
Aedes
 
  1  
Reply Fri 27 Feb, 2009 09:38 pm
@nerdfiles,
I'm just exaggerating, of course, but I think pragmatism is a pretty good starting point for philosophy -- if we are to accept that philosophy needs to be somewhat grounded in the way that real people think and real people make decisions.

I'm not all that literate in cognitive science, but I am an admirer of some experimental philosophy that teams up with neuroscience. I met Josh Knobe about a year ago and I've read a lot of his writing, and for someone quite young I think his work is incredibly thought-provoking.
0 Replies
 
nerdfiles
 
  1  
Reply Fri 27 Feb, 2009 10:19 pm
@nerdfiles,
I don't think your conflation of "pragmatism" and "what real people think" (my emphasis) is justified--or at least it requires some serious clarification.

Leaving Crick aside (though I think he leans toward the eliminative materialist bunch, if only implicitly--he is without question an epiphenomenalist, and he bases his position on Damasio's research), your statement stands in need of serious elucidation of what "real people think" or "real people" or just plain "real" is supposed to mean.

You are saying that a pragmatist position/methodology/model starts us from the "real," or leads us closer to it, or has us take it more into account as a principle for research.

A large part of my accepting that "the mereological fallacy" model is a valid criticism to make of neuroscientific writings and philosophers like John Searle and Daniel Dennett is that it helps me see that these thinkers (along with eliminative materialists, physicalists, epiphenomenalists, who are much more explicit in claiming that the brain does all the work) are attempting to rob the layperson of his or her use of conventional psychological predicates. Many of these neuroscientists and philosophers who justify their literature in a philosophical way argue that the brain does all the work, the brain has feelings, interprets, has beliefs, perceives, sets goals, plans, has knowledge (and even further than the neurons themselves have knowledge), etc.

But my claim is that this is the point of divergence. Do real people walk around with the belief that their neurons have knowledge or, more importantly, that their parts have knowledge? That their brain has beliefs? How about religious people? Will they say, "Ah yes, it is not I who believes in God, but it is my brain."?

I very often fear the easy conflation of "pragmatism" with "the folk." I think it a rather loose sense of "pragmatism" which really has no place in philosophical discussion. But as always, it depends on what you mean by "pragmatism" and "real."
Aedes
 
  1  
Reply Fri 27 Feb, 2009 10:27 pm
@nerdfiles,
nerdfiles;51099 wrote:
You are saying that a pragmatist position/methodology/model starts us from the "real" or lends us closer to it or we take it more so into account as a principle for research.
I think there is no basis to critique the sensibility of people's word choices if you haven't come to terms with how people actually use and interpret words.

nerdfiles;51099 wrote:
Many of these neuroscientists and philosophers who justify their literature in a philosophical way argue that the brain does all the work, the brain has feelings, interprets, has beliefs, perceives, sets goals, plans, has knowledge (and even further than the neurons themselves have knowledge), etc.
You could assert that the brain does all the work just as easily without using such attributions. The mechanism is incidental, but there is certainly a unique biological process that produces my personal sensation of anger or joy or whatever. I can say that without saying that the brain itself is angered or joyful. So there -- neuroscience would be entirely redeemed for you if they all said that, right?

nerdfiles;51099 wrote:
Do real people walk around with the belief that their neurons have knowledge or, more importantly, that their parts have knowledge? That their brain has beliefs? How about religious people? Will they say, "Ah yes, it is not I who believes in God, but it is my brain."?
I don't know, have you asked them?

nerdfiles;51099 wrote:
I very often fear the easy conflation of "pragmatism" with "the folk."
Because the philosopher at his armchair is right but everyone else is wrong?
nerdfiles
 
  1  
Reply Fri 27 Feb, 2009 11:15 pm
@Aedes,
Quote:
You could assert that the brain does all the work just as easily without using such attributions. The mechanism is incidental, but there is certainly a unique biological process that produces my personal sensation of anger or joy or whatever. I can say that without saying that the brain itself is angered or joyful. So there -- neuroscience would be entirely redeemed for you if they all said that, right?


Many of them, if not all, are attempting to explain consciousness. If they ceased to ascribe psychological predicates to the brain, then yes, they wouldn't commit the fallacy.

The problem is what "all the work" consists of, and the interdisciplinary confusions between cognitive psychology, psychology generally, and neuroscience. Let's cut back 60 years. Logical positivism would be perfectly all right if Carnap et al. didn't talk about non-analytic and non-synthetic statements.

Yeah, it would be bliss if the logical positivists, to that end, didn't become a school of thought at all. But their chief goal was to explain something in a particular way (reduction to the physical). Neuroscientists are trying to explain things like conscious experience, emotion, volition, voluntary movement, cogitative powers, cognitive powers, mental states, etc, etc. They don't have to be reductive, but they nevertheless are. It's likely some implicit acceptance of a peculiar form of Occam's Razor.

Quote:
I don't know, have you asked them?


You brought up "real people." I really should be asking you that. Asking that would be the logical "next step" after asking you what "real people" is supposed to mean, which is what I did.

Quote:
Because the philosopher at his armchair is right but everyone else is wrong?


...?

Because I don't know what you mean by "pragmatism."

Perhaps you don't understand... I am arguing from a position of ordinary language philosophy.

I am attacking someone who I take to be employing idealized terms as an attempt to suggest that everyone else is wrong.

My friend, I am arguing against eliminative materialism, not for it. Eliminative materialists deny that psychological predicates properly apply to other human beings. They think it only makes sense to say the brain is angry, the brain is interpreting the world, the brain is setting goals.
0 Replies
 
Mr Fight the Power
 
  1  
Reply Mon 2 Mar, 2009 06:27 am
@nerdfiles,
nerdfiles wrote:
The statement indicates to me that you felt something (because you've said it).


Yes, I understand your argument to a decent extent.

"because you've said it" is the abbreviated version of the answer I was looking for. I don't understand how you know I felt some way because I said I did.

As far as I can tell there is still an asymmetry, and I don't see how you have any way of knowing that the speaker's understanding of "angry" approximates anything like your understanding of "angry".

Quote:
I simply understand that the speaker of the sentence is the person making the utterance. I understand which object "I" refers to, namely you. And you understand this as well in making your utterance.


My brain is a part of me. How does my statement "I think" eliminate the possibility that only the physical content of my brain thinks?

Quote:
For one, I am not claiming a "necessary link." I thought it clear that when I say "criteria" I do not mean a "definition." "Criteria" and "definition" do not mean the same thing. So I am not quite sure what claim you take me to be making. I do not fully understand what "extend behavior past inductive evidence" is supposed to mean. My claim is that inductive evidence and criterial evidence are different. We ascribe psychological predicates to things based on criterial evidence because behavior is partly constitutive to the meaning of the predicate in question.

For example: I ascribe anger to X because certain behavioral criteria are met that satisfies what it means to be angry.


I am fairly sure I can differentiate between inductive evidence and criterial evidence.

To me, "I'm angry" is only criterial evidence that there is another, but it is only through inductive reasoning that I accept him to be similar to me.
 
