The Problem of Consciousness

 
 
salima
 
Reply Sat 5 Sep, 2009 05:35 pm
@jeeprs,
jeeprs;88363 wrote:
Ah, but can we? I think this misses the point. The bytes might exist on the hard drive - it is after all just binary code - but the point I was making is that these bytes don't mean anything at all until they are interpreted. Remember, we are talking of the nature of conscious experience here; not just the transfer of information. I think you would like to believe that these are somehow equivalent; but I don't think they are. To say that they are equivalent is to assume what you are setting out to prove.

The bytes on the hard drive do not convey meaning until they are interpreted, first by the operating system, and then by the human subject. And both these operations are external to the binary code that 'represents' the text. In other words, the meaning of the message, which is essential to its nature, is not inherent in the binary code as such. The code is, after all, just zeros and ones. And without the operating system and the subject reading it, nothing meaningful can be said to exist on the hard drive.

So the analogy of 'translation' is not appropriate. Translation between Japanese and English assumes that the hearers are already able to interpret and understand the meaning of the phrase that is being translated. But what we are considering here is whether the actual elements of conscious experience can somehow be reproduced in electrochemical form, using the storage of bits on a computer as an analogy for this. To say, then, that 'conscious experience' can be regarded as 'neural activity' or 'proteins' is not a matter of 'translation' because this already assumes that what we are discussing can be represented in symbolic form or, in fact, reduced to 'information'. So you are talking of 'transformation' rather than 'translation'. The conception is that of the transformation of one type of thing - namely 'interpretation of meaning' - into another - namely 'proteins encoded in cells'. But these two kinds of thing exist on different levels. By saying they are translatable, you are assuming equivalence. So, as noted above, you are assuming what you are setting out to prove.



This seems to be saying that some amount - perhaps a considerable amount - of neural activity does not qualify as 'conscious'. Perhaps this might correspond to that activity which psychologists have designated 'unconscious' or 'subconscious'? The key question then is, if all of this 'unconscious' and 'subconscious' component is going on, how do you model this part of the activity? Even if you can map a representation of an image as a neural pattern, what of all the distributed activity that is happening elsewhere in the brain, and even in the rest of the body, for that matter, which may not be an explicit aspect of the particular mental event in question - consciousness may not be aware of it - but which nevertheless may be an implicit component of the mental operation?

In other words, where in the neurological model of consciousness are the unconscious and subconscious represented?





wish i had said that!
and you thought you werent contributing anything much to this thread?
amazing!
alcaz0r
 
Reply Sat 5 Sep, 2009 06:31 pm
@salima,
jeeprs, I think you raise some important questions.

I wonder, though, if we can't consider these questions with more precision, as long as we form our ideas with enough care and attention.

Let's do a thought experiment!


Let us start with an impression:
  • Light bounces off a chair; some of these rays enter our eye, striking the back of our eyeballs in a specific pattern.
  • This specific pattern is translated into neural impulses. As you correctly point out, this pattern of impulses is, considered on its own merit, arbitrary.
  • Since it is arbitrary, let us represent this pattern of impulses as [Y2xvc2U=,Y2hhaXI=]. For this example, this represents the image of a chair that is close to us.
Consider what happens next. This pattern proceeds to the brain, where it is stored as a memory / idea.

Now let us proceed to what happens when we receive an impression of a different chair. Let's say this one is far away. The pattern of neural impulses incited in the eye is similar in some ways and different in others from the earlier impression we received from the close chair. Let's say it is [ZmFyDQo=,Y2hhaXI=].

So, how do we reconcile the impression of the far away chair with the memory of the chair which we saw close to us? By recognizing the similarities in the patterns of brain activity they elicit.


Therefore, I cannot imagine that anything else is necessary to give meaning to our arbitrary neural activity than:
  1. That similar sensations elicit similar patterns of brain activity.
  2. That these patterns of brain activity are stored in memory.
  3. That current input can be compared to past input, and similarities recognized.
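The three conditions above can be sketched as a toy program (Python, with the bracketed tokens from the example reused as opaque features; the similarity function and names are illustrative, not a claim about how brains actually do it):

```python
# A toy version of the three conditions: similar sensations yield
# overlapping feature patterns, patterns are stored in memory, and
# new input is matched against memory by similarity.

def similarity(a, b):
    """Fraction of features two patterns share (Jaccard index)."""
    return len(set(a) & set(b)) / len(set(a) | set(b))

# stored pattern from the earlier impression of a close chair
memory = {"close chair": ["Y2xvc2U=", "Y2hhaXI="]}

# a new impression: a chair seen far away
new_impression = ["ZmFyDQo=", "Y2hhaXI="]

# compare current input to past input and pick the best match
best = max(memory, key=lambda k: similarity(memory[k], new_impression))
print(best)  # close chair -- the shared 'chair' feature is recognized
```

The shared feature is what lets the new impression be "reconciled" with the stored one, exactly as the list describes.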
---------------------------------------------------------

I really had no idea how this would turn out when I started typing. The conclusion I draw is that we may never be able to truly decode neural impulses based on first principles (I think you will agree to this part, jeeprs.) The machines we use to interpret a person's brain activity will have to be trained to recognize the specific patterns of brain activity that are elicited in a person's brain as they receive certain impressions. Once that is done, we will be able to interpret the patterns of brain activity the machine has been trained to interpret.

The other conclusion I draw is that this doesn't really have much to do with consciousness. The internal translation that occurs, that gives meaning to our brain activity, is a matter of pattern recognition. The meaning we associate with our present sensory inputs is derived from the similarities they bear to past sensory inputs. I don't think that we are aware of this process so I'm not sure it is important to consciousness, though I could be convinced otherwise.
 
jeeprs
 
Reply Sat 5 Sep, 2009 07:04 pm
@BrightNoon,
Where I am actually trying to get to in all of this is the actually rather modest proposal that the neuro-scientific analysis of consciousness is but one of a number of perspectives and not 'the final say'.

Let's see what else comes up.

---------- Post added 09-06-2009 at 11:15 AM ----------

alcaz0r;88387 wrote:


Therefore, I cannot imagine that anything else is necessary to give meaning to our arbitrary neural activity than:
  1. That similar sensations elicit similar patterns of brain activity.
  2. That these patterns of brain activity are stored in memory.
  3. That current input can be compared to past input, and similarities recognized.
---------------------------------------------------------

There is of course one other crucial factor: your ability to conceptualise it in just this way.

There will always be a recursion issue in these arguments.
 
KaseiJin
 
Reply Sat 5 Sep, 2009 09:04 pm
@jeeprs,
First of all, let me again emphasize (because I have honestly found it a bit more 'hindersome' than otherwise) that we may run into problems because of the analogy itself, rather than the details of what we are trying to look for in understanding the problem of consciousness.

In that we have started off with an analogy of information being laid onto a computer's hard drive, I guess we'll have to not only stick with that, but to understand that as well. . .so, here goes:

[indent]
jeeprs;87989 wrote:
I am writing on a computer. . . . You could also probably zoom in on the hard drive and find the actual bytes which represent this particular text on the drive of a server. But then, if they are not interpreted by the operating system, displayed on a screen, and read by another human, how can they actually be said to mean anything? (bold mine, and PLEASE do pay attention to the sense it draws out)


KaseiJin;88180 wrote:

What we can say has essentially occurred is a translation of information into another format, which can be translated back out of that format by an understander of it, into the original format once again. . .
[/indent]



jeeprs;88363 wrote:
Ah, but can we? I think this misses the point. The bytes might exist on the hard drive - it is after all just binary code - but the point I was making is that these bytes don’t mean anything at all until they are interpreted.


Here, I fully disagree. The essential event is one of translation, and it is not the case that the 'translated-into' format has no meaning. There is a chance, perhaps, that not enough intent is being attached to the idea of 'information,' maybe? Putting aside, for the moment, the fact that the system was set up to physically lay down binary 'event points' (for lack of a technical term)--which, actually, would automatically and logically demand that there be meaning in any arrangement of these event points, because that's the whole idea behind the invention!--we can most clearly see that by writing on a computer you have laid down an arrangement of event points onto the hard drive. This, we can put into the factual basket.

Now, in that what you had written contained information (which would be the case even if what you had written had been gibberish, actually, in that you would have had to input letters, and each letter is a piece of information), how can you say that what you have laid down in event points is meaningless? Most obviously, you are not taking into account the fact that you have actually put information (and we can take this to be a correct English sentence [since we are talking of writing on a computer]) into a different form. For example, let's say you had typed (written) the following sentence in English, on your keyboard, and by extension, onto the hard drive of your computer:

[indent][indent][indent][indent]There are about 106 shopping days left until X-mas.[/indent][/indent][/indent][/indent]

By having done so, you laid down a specific pattern of event points. This is what we mean by translation, and there is no error in the use of that term towards what has happened. The bytes on the hard disk are not meaningless, even if left untranslated back out again, because they exactly equal the communication that you have written on your computer. This also, can be put in the factual basket.
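In computing terms, the claim here is that the encoding is lossless: the byte pattern laid down and the sentence typed are interconvertible. A minimal Python sketch (purely illustrative, not part of the original exchange):

```python
# The 'event points' laid down for a sentence can be translated
# back out, recovering the exact original -- the mapping is lossless.
sentence = "There are about 106 shopping days left until X-mas."

event_points = sentence.encode("utf-8")   # lay down the byte pattern
recovered = event_points.decode("utf-8")  # translate it back out

print(event_points[:5])            # the first few raw bytes: b'There'
print(recovered == sentence)       # True -- nothing lost in either direction
```

In this narrow sense the bytes "exactly equal the communication", which is the point KaseiJin is pressing; whether equality of information settles the question of meaning is exactly what the thread goes on to dispute.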

The point which you said (end of line one of first paragraph) you had been working towards factualizing, as far as it relates to brain and consciousness, will have to either be put aside for a moment (since you have introduced an analogy which must first be understood properly), or we should discard the analogy.

The request I will make is that you once again consider the verity of the logical understanding that by writing the English sentence (as given above, for example) onto a hard disk, you have put that exact same information into a form which is not English exactly, but which is the exact equivalent--in other words, that exact English sentence carrying that exact information in simply a different form--and thus that that arrangement of event points cannot be said to be meaningless; or, in no uncertain terms, demonstrate how it could be said that such has no meaning--even if left uninterpreted.



There are other points in your #100 (nice number) which will only be arguable after this is clear. Also, other points have come up which I intend to answer, but one at a time, for the purpose of clear and precise development of argumentation !
 
jeeprs
 
Reply Sun 6 Sep, 2009 03:43 am
@BrightNoon,
This is not a rebuttal of my argument, it is a reiteration of yours. Please have another look at the point about the distinction between 'translation' and 'transformation'.
Pathfinder
 
Reply Sun 6 Sep, 2009 05:25 am
@BrightNoon,
How can one ignore the fact that what is typed into a computer is the conscious intention of the user, and that the origin of this intent is not found in the bytes on the hard drive, but in the mind of the user alone?

The only reason the intent is revealed is because you are able to translate what the user intended to reveal; but what if it was written in code, or a language that you could not understand? In this case you could put those bytes and biological reactions and functions under the scope all day long and never be able to figure out the truth behind the writing.

The conscious intent is not found in the brain/computer; what is found there is the aftermath of the consciousness. Whether or not you can decipher that depends on certain elements of revelation.

---------- Post added 09-06-2009 at 06:33 AM ----------

Alcaz0r said: "The other conclusion I draw is that this doesn't really have much to do with consciousness. The internal translation that occurs, that gives meaning to our brain activity, is a matter of pattern recognition. The meaning we associate with our present sensory inputs is derived from the similarities they bear to past sensory inputs. I don't think that we are aware of this process so I'm not sure it is important to consciousness, though I could be convinced otherwise."

There are many degrees of consciousness, and in some cases it involves emotional reactions and responses as well--feelings that often seem uncontrollable and require consideration to understand.
KaseiJin
 
Reply Sun 6 Sep, 2009 07:13 am
@jeeprs,
jeeprs;88421 wrote:
This is not a rebuttal of my argument, it is a reiteration of yours. Please have another look at the point about the distinction between 'translation' and 'transformation'.


I do not presently reason that to be correct; I would therefore like to ask you to expound on your idea of the difference between translation and transformation. To translate is surely going to be along the lines of 'carrying a thing across' into a different structure, whereas to transform would surely mean to alter a thing into a different form--nothing carried across.

Also, just in case, please keep in mind that I am only looking at the typing on the computer event, here.
 
richrf
 
Reply Sun 6 Sep, 2009 07:44 am
@Pathfinder,
Pathfinder;88425 wrote:
How can one ignore the fact that what is typed into a computer is the conscious intention of the user, and that the origin of this intent is not found in the bytes on the harddrive, but in the mind of the user alone.


Hi Pathfinder,

I agree. Someone has to show the independent origin of Intent as well as Awareness. In addition one has to show the origin of Creativity as well as Memory (that transcends a single life such as instinct), and Desire to Live.

But everything is entangled, so there is no way ever to show this. Consciousness is entangled with whatever we are aware of.

If one cannot show how it all started independent of our own consciousness, then it is just another belief system and anyone can believe what they want. Some in a God, some in a flake of sand that suddenly, naturally, began to think, create, store memory of experiences, and have a desire to live. Personally, I think consciousness is the origin of it all - but the origin of consciousness?

Rich
 
jeeprs
 
Reply Sun 6 Sep, 2009 04:49 pm
@BrightNoon,
I think rather than repeat the earlier post, I will try and re-state it. We are discussing the extent to which neural patterns can be understood to represent objects in the real world. We referred to the study where experimenters were able to identify neural patterns as reactions to specific stimuli shown to monkeys. I am questioning the extent to which such neural patterns can be understood as 'really being' the objects to which they are said to correspond. I pointed out that the neural patterns in themselves are represented as displays of some kind of electrical signals on a monitor; and that without the interpretative ability of the experimenter, they cannot really be said to have any meaning. So the question is asked again: to what extent can you say these neural patterns 'really are' the things they are believed to represent?

The analogy of the computer system was used to show that even though the bits that represent a string of text are stored on a hard drive, the meaning of the text depends on the interpretation by the computer operating system, and again by the human reading the text. Again the question is raised, to what extent could the meaning of the text be said to be 'really there' in the absence of interpretation?

You are basically arguing that the meaning is implicit in, or embedded in the message; that the message can be 'translated' not only between languages, but also between different forms, i.e. binary code and written text. From this it can be inferred that the information which the message carries is somehow embedded in the message - which is why it can be translated. I am arguing that the meaning of all of it - the neural patterns, the text on the screen, the bits on the hard drive - is not implicit in the message or the patterns but is provided by the examining intelligence.
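The computer half of this argument can be made concrete with a small sketch (Python purely for illustration; nothing in it comes from the thread itself): the very same pair of bytes reads as different text depending on which decoding scheme the interpreter brings to it.

```python
# One pair of bytes, two readings -- which one you get depends
# entirely on the decoding scheme applied by the interpreter.
raw = bytes([0xC3, 0xA9])

as_utf8 = raw.decode("utf-8")      # a UTF-8 reader sees 'é'
as_latin1 = raw.decode("latin-1")  # a Latin-1 reader sees 'Ã©'

print(as_utf8, as_latin1)
```

Neither reading is "in" the bytes any more than the other; the codec, standing in for the examining intelligence, decides which text the pattern becomes.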

KaseiJin;88401 wrote:
For example, let's say you had typed (written) the following sentence in English, on your keyboard; and by extension, onto the hard drive of your computer:

[indent][indent][indent][indent]There are about 106 shopping days left until X-mas.[/indent][/indent][/indent][/indent]

By having done so, you laid down a specific pattern of event points. This is what we mean by translation, and there is no error in the use of that term towards what has happened. The bytes on the hard disk are not meaningless, even if left untranslated back out again, because they exactly equal the communication that you have written on your computer.


But this is only true because you are always and everywhere assuming the presence of the operating system to interpret this string, and the ability of the human to read it. The macaque monkey could not interpret it. It is meaningful within a certain realm, a given matrix of meaning. This may be a reasonable assumption, from the viewpoint of neuro-science, but it is an assumption nevertheless. And as we are trying to give an account of consciousness, the ground of all such assumptions needs to be questioned. This is why I say you are assuming what you set out to prove.

In this thread, you start off with the picture of a 'sentient brain' in a 'real world'. In this picture, the job of the brain is to assimilate impressions and sensations from the 'real world' and then reproduce them in electrochemical patterns. Given that this is what you understand consciousness to be, the neurological model seems to provide a lot of validation. You are able to point to studies such as Poggio and Hung and say that this 'proves' that the job of consciousness is to reproduce stimuli in terms of neural activity. Whereas it seems to me that all it really proves is that stimuli produce patterns of reaction. It doesn't prove that the neural patterns are themselves the sole constituents of a conscious act. And I don't see how it ever can, because they too exist within a matrix, and have never been observed to exist outside that matrix.

Let us consider the 'enactivist' account of consciousness. The 'enactivist' perspective says that, in reality, these patterns of reaction are always situated within a body, and within an environment.

"Cognitive scientists standardly assume a division between independently existing ("pregiven") "external" objects, properties and events on the one hand and their "internal" representations in symbolic media in the mind/brain on the other. Varela et al. propose to replace this with an "enactive" account. The fundamental differences are encapsulated in answers to three questions:

[indent] Question 1: What is cognition?
Cognitivist Answer: Information processing as symbolic computation--rule-based manipulation of symbols.
Enactivist Answer: Enaction. A history of structural coupling that brings forth a world.

Question 2: How does it work?
Cognitivist Answer: Through any device that can support and manipulate discrete functional elements--the symbols. The system interacts only with the form of the symbols (their physical attributes), not their meaning.
Enactivist Answer: Through a network consisting of multiple levels of interconnected, sensorimotor subnetworks.

Question 3: How do I know when a cognitive system is functioning adequately?
Cognitivist Answer: When the symbols appropriately represent some aspect of the real world, and the information processing leads to a successful solution to the problem given to the system.
Enactivist Answer: When it becomes part of an ongoing existing world (as the young of every species do) or shapes a new one (as happens in evolutionary history). [/indent]


[Daniel Dennett, Review of F. Varela, E. Thompson and E. Rosch, The Embodied Mind, American Journal of Psychology, 106, 121-6, 1993.]

Hence my critique of the neuroscientific model of consciousness. The assumption seems to me to be that the neuroscientific model supersedes previous philosophies because we now know more than did traditional philosophy; we possess scientific knowledge. But it also depends on another set of assumptions: first that consciousness is an epiphenomenon of brain activity; then we study the brain to understand that activity; therefore we understand the nature of consciousness. This is a suitable shorthand depiction of the overall neuro-scientific approach, is it not?

So the real issue in all of this is whether a neurological account of the functioning of conscious brains really amounts to the same thing as a philosophical analysis of the nature of conscious experience. The neuroscientists would like to think it does, but I think I have shown that it does not.
KaseiJin
 
Reply Sun 6 Sep, 2009 05:19 pm
@jeeprs,
jeeprs;88539 wrote:
I think rather than repeat the earlier post, I will try and re-state it. . . .


I appreciate your efforts and will come back to some points from there, jeeprs, but am not satisfied that the learning process will have been brought to its full completion by dropping the line of thought first brought up by, and in, your analogy, and then simply trying to explain a point supporting your position.

You have denied that the event points on a hard drive, derived from an English sentence typed on a keyboard have meaning. That understanding is faulty, and the reason for not seeing the truthfulness of what I am saying might possibly be due to leaving out the source of input.

For that reason, I will offer a test, of sorts, to get my point across that the arrangement of event points on that hard drive does have meaning because the sentence you put in there had meaning. (Here I am working within the bounds of the situation where you have not written gibberish, but a sentence with real meaning.)

I will send you a PM, and without any research or investigation, (but with actually looking at it a bit) please tell me if you would tend to think that the test sentence that I will have written, is meaningful, or not. Then, having arrived at your conclusion on that, please make a post here in this thread, quoting the sentence from the PM (sentence from KJ) and telling us whether you reason the sentence had had no meaning, or not.

I will PM you within 5 minutes of having posted this. Thanks for helping out here, to make this most clear ! KJ
jeeprs
 
Reply Sun 6 Sep, 2009 05:48 pm
@KaseiJin,
KaseiJin;88543 wrote:
You have denied that the event points on a hard drive, derived from an English sentence typed on a keyboard have meaning.


Actually I didn't. I said that in order for them to be meaningful, they need to be interpreted. The bits of the message can only be said to be meaningful if they are interpreted. The meaning is not embedded in the message.
-------------------
Now I have received your test sentence. It appears to be in a language I don't know; at least I presume that it is a language, because it doesn't appear to be a random string. But in either case, it has no meaning to me, because I can't interpret it.

I am not quite clear on the point that is being made by that? I would have rather thought it was contra your point of view. (As an aside, is it a language, and if so which?)
KaseiJin
 
Reply Sun 6 Sep, 2009 06:03 pm
@jeeprs,
Thanks for taking a look at it, and if you wouldn't mind (later on today when you get the time) please do quote that sentence (as is) out here.


jeeprs;88552 wrote:
Actually I didn't. I said that in order for them to be meaningful, they need to be interpreted.


Yes, but that is the equivalent of saying that the message has no intrinsic meaning in and of itself, which is not true--because it is due to its having meaning that it was put there as it was.

I'll keep my eye open here, and I do appreciate your working through this carefully with me ! Thanks !!
 
jeeprs
 
Reply Sun 6 Sep, 2009 11:38 pm
@jeeprs,
jeeprs;88539 wrote:
The analogy of the computer system was used to show that even though the bits that represent a string of text are stored on a hard drive, the meaning of the text depends on the interpretation by the computer operating system, and again by the human reading the text. Again the question is raised, to what extent could the meaning of the text be said to be 'really there' in the absence of interpretation?
(emphasis added).

This is what I said, and I still think it is hard to dispute. I think the problem we are having here is that this is a phenomenological rather than scientific analysis. I don't know if you are completely conversant with an analysis of this kind. (It is also possible that my analysis is incorrect, but in my view this hasn't been shown yet. If it is I will certainly modify my view).

I think the whole problem revolves around the idea of 'representation'. You are saying that a string of bits really represents a sentence irrespective of whether it is interpreted or not. But then, you sent me a sentence in a language I couldn't understand, so as far as I was concerned it had no meaning. The words may be meaningful to somebody, but not to me. It really only serves to illustrate the point I am making. Meaning is imputed by an intelligence; no intelligence, no meaning.
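jeeprs's side of the PM test can also be sketched (Python, illustrative only): a byte string that is perfectly meaningful under one scheme is simply uninterpretable to a reader equipped with a different one.

```python
# Bytes that carry meaning for one reader are uninterpretable to
# another who lacks the right 'language' -- here, an ASCII-only
# reader confronted with UTF-8-encoded Japanese.
data = "\u610f\u5473".encode("utf-8")  # the Japanese word 'imi' ('meaning')

try:
    data.decode("ascii")
    interpretable = True
except UnicodeDecodeError:
    interpretable = False

print(interpretable)  # False -- for this reader, no interpretation is possible
```

The bytes are unchanged in both cases; whether a reading exists depends on what the reader brings, which is the "no intelligence, no meaning" point in miniature.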

Let's see if we can either refute or agree on this point before going any further.

---------- Post added 09-07-2009 at 04:06 PM ----------

[INDENT] '...Science, in its general effort towards objectification, evolved a picture of the human organism as a physical system undergoing stimuli which were themselves identified by their physico-chemical properties, and tried to reconstitute actual perception on this basis, and to close the circle of scientific knowledge by discovering the laws governing the production of knowledge itself, by establishing an objective science of subjectivity. But it is also inevitable that this attempt should fail.' Maurice Merleau-Ponty, The Phenomenology of Perception, Routledge Classics, p12 [/INDENT]
KaseiJin
 
Reply Mon 7 Sep, 2009 12:46 am
@jeeprs,
It may well be the case that I have read you wrongly, or have taken an understanding which you may not have so intended to put forward, yet, let's stick with trying to get the understandings at least ironed out...at least to the point that we both can feel we have understood each other; matters of accuracy coming later.


jeeprs;87989 wrote:
I am writing on a computer. You can know all there is to know about this computer, be you a chip scientist from Intel or a Mac software engineer, but that won't necessarily mean that you can make sense of what is being written on here. You could also probably zoom in on the hard drive and find the actual bytes which represent this particular text, on the drive of a server. But then, if they are not interpreted by the operating system, displayed on a screen, and read by another human, how can they actually be said to mean anything? (color and bold mine)


Within the context of this passage (and please note, this is the very original), we are pretty much required to accept that you are setting the referent of the pronoun it in the blue bold clause as that text--which is what the bytes make on the hard drive. If this is incorrect, please explain how it might be.

Then, within the context of that passage, we are pretty much at the mercy of the wording there to understand that saying "if they are not interpreted . . . how can they . . . mean anything" leaves us with the implied assertion that there is no meaning in the text unless it is interpreted by a second or third party. If this is incorrect, please explain how that might be.

I argue that it is not so much a matter of any scientific view, nor even any philosophical view, really; I instead reason (at this point) that it is rather a matter of a pragmatic paradigm. Also, yes, you have gotten my assertion correctly.

I suggest that the fact that you cannot understand the text I have sent you (and I do wish you'd quote that here on this thread...again...unless you have some reserve about doing so--I can do it if you'd like; and this is no trick or anything, just a simple test to explain what I am saying more clearly) only means that you cannot understand its meaning, not that it has no meaning in it. This is true in that even the text typed into the computer (as given in the original analogy) had meaning to the writer of the text--so. . . the text has meaning. Regardless of whether anyone else interprets that text on the hard drive, or not, the text has meaning because it is a translation into binary language from the English text which has meaning--and therefore that binary code text has the very same meaning as the English that it had been translated from. It is exactly like going from one language into another...whether binary code, Morse code, or hieroglyphics, it's essentially a matter of translation.
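The closing comparison (binary code, Morse code, hieroglyphics) can be sketched the same way: a reversible mapping between forms, with the content carried across intact. (The two-letter Morse table below is just enough for the example.)

```python
# Morse code as 'translation' in this sense: a reversible mapping
# between forms that carries the content across unchanged.
MORSE = {"S": "...", "O": "---"}
REVERSE = {v: k for k, v in MORSE.items()}

encoded = " ".join(MORSE[c] for c in "SOS")   # English -> Morse
decoded = "".join(REVERSE[t] for t in encoded.split(" "))  # and back

print(encoded)  # ... --- ...
print(decoded)  # SOS
```

Whether such reversibility settles the question of meaning is exactly what the two sides of this exchange disagree about; the sketch only shows what 'translation' claims.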
salima
 
Reply Mon 7 Sep, 2009 12:59 am
@KaseiJin,
KaseiJin;88629 wrote:
It may well be the case that I have read you wrongly, or have taken an understanding which you may not have so intended to put forward. . . . It is exactly like going from one language into another...whether binary code, Morse code, or hieroglyphics, it's essentially a matter of translation.


maybe it is more a question of the text only having meaning because we gave it meaning--we, or someone, or something.

otherwise, what you seem to be saying is that any gibberish at all we type has meaning. your argument would be: how can you say it does not, just because you don't understand it?

am i following this properly?
 
KaseiJin
 
Reply Mon 7 Sep, 2009 01:38 am
@BrightNoon,
Yes, salima . . . however, please do take note that I have not made any further applications since my first post responding to jeeprs' original post offering the computer analogy. This is because of his response to my reply to that original. For that reason, at the moment, we are only talking about that computer analogy.

I had used the gibberish example earlier simply to show that what is typed onto the computer's hard disk need not be a sentence with meaning, or even a real word, per se, because whatever is written will be translated into the language on the hard drive. Basically, one could do the very same thing on paper. (Which is, in part, why the test--I hope jeeprs will post it, or allow me to; I had mentioned it from the beginning today, and mentioned it since, and he has posted, and had received the PM, so I am a little worried. It's not a trick of some kind.)
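The point that the encoding step is indifferent to meaning can be sketched as follows. (Python for illustration; both sample strings are invented here, not taken from the thread.)

```python
# The hard drive's "translation" is indifferent to meaning: a real word
# and invented gibberish pass through the character -> byte mapping
# equally well, and equally losslessly.
real_word = "meaning"
gibberish = "xqzvvk"

encoded_word = real_word.encode("utf-8")   # characters -> bytes
encoded_gib = gibberish.encode("utf-8")    # the encoder never objects

# Both round-trip perfectly; meaningfulness plays no role in the mapping.
assert encoded_word.decode("utf-8") == real_word
assert encoded_gib.decode("utf-8") == gibberish
```

The encoder applies the same table to both inputs; whether either string means anything is settled elsewhere, by writers and readers, not by the mapping.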

If I took a piece of paper and wrote a Chinese character on it, no one who could not read Chinese characters would know the meaning of that symbol on the paper. It is a fact, however, that that symbol would have meaning--even if one could not understand it--and of course, since I wrote it in translation of an English word, that symbol on the paper is a translation. I translated an English word into a Japanese word using the specific Chinese character, on that piece of paper.
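The paper-and-ink point above, that translating is symbol-for-symbol substitution against an agreed table, can be shown in miniature. (Python for illustration; the two-entry table below is genuine Morse code, but the word choice is arbitrary.)

```python
# Translation as lookup in an agreed symbol table (here, Morse code).
# Whoever holds the table can reverse the mapping; whoever doesn't
# sees only marks on paper, though the meaning was carried all along.
to_morse = {"S": "...", "O": "---"}
from_morse = {v: k for k, v in to_morse.items()}

word = "SOS"
translated = " ".join(to_morse[c] for c in word)
recovered = "".join(from_morse[t] for t in translated.split())

assert recovered == word  # nothing was transformed, only re-symbolized
```

Like the Chinese character on paper, the Morse marks are opaque to anyone without the table, yet the message survives the trip both ways intact.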

Do you get the drift of what I am saying? The heart of the matter is that it is, indeed, an act of translating, not transforming. I hope jeeprs posts that quote soon enough...but I am worried....
 
jeeprs
 
Reply Mon 7 Sep, 2009 01:47 am
@BrightNoon,
OK that test sentence was:

[INDENT]mujhe bharathiya khana bahoot pasand hai[/INDENT]
 
Pathfinder
 
Reply Mon 7 Sep, 2009 04:06 am
@BrightNoon,
What you are talking about just goes back to the original issue of whether or not the brain requires some mechanism external to its own design to accomplish something.

You are suggesting that the brain is able to translate these bytes regardless of the individual's capability--that it is somehow ingrained in the functional components of the brain to do so whether a person knows the language or not.

We dispute that entirely, saying that the translation is an ability acquired from a source external to the individual--not originating in the brain's native capability, but one that must first be put there by experience and decision.
 
salima
 
Reply Mon 7 Sep, 2009 05:46 am
@BrightNoon,
of course i can read it, kj!!!

but my thought is that any computer could translate it into english if it were programmed with all the languages in the world. at the same time, could a computer make up a new language?

neither the translator nor the interpreter is as important as the growth and evolution of the language itself, as it grew out of the minds of human beings.

kj, are you saying that your test sentence will be translated into the binary language by a computer? and likewise so will any gibberish at all that we write which doesn't exist in any language? even symbols that are not a part of any language? if so, the computer will know it doesn't make sense, i guess...

i am not sure how all this applies to the issue, but i am sure it does. the fact is the computer is doing translating (which the human is also capable of), but the human being is doing transforming by attaching meaning to certain symbols in the first place. both can interpret, i think.
 
jeeprs
 
Reply Mon 7 Sep, 2009 05:57 am
@BrightNoon,
Care to let me in on the secret here? My curiosity is overpowering the enjoyment I am getting from the fact that my being unable to read it supports my argument.
 
