Fil Albuquerque
 
Reply Mon 31 Oct, 2011 11:12 am
@fresco,
bullshit...you are a mystifier, like many others of your kind...I know you better than you think...
But I am here waiting for your account of the human special case...explain it without the usual crap...be straightforward, please...

...and what exactly do you mean by "observer-independent ontology", eh? That things which affect you are not there, or that your description of them is biased? What a freaking salad you're hiding behind big words...
igm
 
Reply Mon 31 Oct, 2011 11:34 am
@Fil Albuquerque,
This is also worth a look...

http://www.users.globalnet.co.uk/~lka/conz.htm
 
fresco
 
Reply Mon 31 Oct, 2011 12:19 pm
@Fil Albuquerque,
Humans are a special case insofar as they uniquely use "languaging behaviour", which is the basis of all measurement/mathematics (the lowest level of measurement = nominal = naming). And as Maturana argued, all "observation" involves languaging; so although humans are not a special case from a biological point of view, "observation" and its sub-issue of "information" are unique communicative realms for our species, which we wrongly ascribe to other species.
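As a toy gloss on that parenthetical (a minimal Python sketch of my own, not fresco's or Maturana's): nominal measurement is nothing more than naming, i.e. assigning observations to categories, with no order or magnitude implied.

Code:
from collections import Counter

# Nominal measurement: observations are merely named/categorized.
observations = ["robin", "sparrow", "robin", "crow"]

# The categories themselves carry no order or magnitude...
nominal = set(observations)     # {'robin', 'sparrow', 'crow'}

# ...though counting within named categories is the first step
# toward the higher levels of measurement (ordinal, interval, ratio).
counts = Counter(observations)  # Counter({'robin': 2, 'sparrow': 1, 'crow': 1})

print(nominal)
print(counts)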

That is my last comment regarding your belief system on this thread.
 
JLNobody
 
Reply Mon 31 Oct, 2011 12:32 pm
@Fil Albuquerque,
You've said some interesting things on the last page. But they are interesting mainly in the sense that science fiction (in this case, philosophical fiction) is entertaining. It astounded me to see you accuse Fresco--and others like him (there are no others like him around here)--of being a mystifier. You say that you know him better than he thinks. It may seem so to you, but only because you are projecting your own disposition onto him. YOU are the grand mystifier around here. I hope for your sake you can translate this into a profession of film/book making: you might compete with the producers of The Matrix.
By the way, North has very constructively noted your Achilles' heel: "...and it's not just about consciousness but also about the sub-conscious."
fresco
 
Reply Mon 31 Oct, 2011 01:34 pm
@JLNobody,
Yes, speaking of "the mysterious", one side issue is that of its antithesis, oversimplification. "Believers" are selective about evidence. In this respect Fil's selection of "computational" brain studies ignores the fact that such studies naturally dictate contrived observational structures, such as binary decision scenarios, which rarely occur in "real life".
 
igm
 
Reply Mon 31 Oct, 2011 01:46 pm
@Fil Albuquerque,
Fil Albuquerque wrote:

[youtube]http://www.youtube.com/watch?v=aw9Jo5qNCsQ&feature=relmfu[/youtube]

So he says we 'know' (6:20) that consciousness is distributed across the cortex in a 'sort of' holographic fashion... then later... that we know memory is not stored in one place and 'probably' consciousness is the same (6:30). (Note that seconds before, he says we 'know' it is distributed in a hologram-like way.) This is rubbish. He doesn't know where this image is, because it has not been found by anyone except the subject perceiver (who is just the interaction between memory and consciousness)... I'd say the objective investigator will 'never' find it, because it is not located in time or space, and neither is the reality that science examines... science studies subjective reality and doesn't realize it. Anyway, he speaks quickly and moves past this because he knows 'nothing' about the location of this mental image, and just uses words like 'holographic distribution' as 'smoke and mirrors' for his lack of evidence. As he moves past the unknown, we are blinded by detailed science based on 'scientific faith', i.e. reason standing on the shoulders of faith, i.e. fundamental assumptions about the unknown.
fresco
 
Reply Tue 1 Nov, 2011 01:45 am
@igm,
I've always taken the significance of Hameroff to be his suggestion that consciousness may be a "quantum phenomenon", which historically implies the interaction of observer and observed, and non-locality. The fact that he suggests a general location for this phenomenon (in the microtubules) is an aspect of his keeping one foot in conventional "brain science" and representationalism.
igm
 
Reply Tue 1 Nov, 2011 05:47 am
@fresco,
Yes, it is interesting... but... when science shows me the (internal) phenomena of another person, and doesn't just give me measurements that seem to correspond to them, then I'll believe that consciousness can be explained using 'brain science'; or when scientists can show me the subjective internal light (mental images) and sounds (internal dialogue) of my conscious experience, then I'll start to believe that science has the answers to what consciousness is.

Until then I'm happy with the unexplained, because I've investigated phenomena and can't see a way their true nature could be explained... I'm happy with the practical, relative use of science... and with life, i.e. the interaction and communication with so-called others... who nonetheless seem to be all aspects of the same reality, with the same ineffable nature.
 
igm
 
Reply Tue 1 Nov, 2011 09:54 am
@fresco,
In particular... this is... interesting:

Quote:
This idea that only one state of the brain will be consistent with the geometry of conscious observation satisfies the Copenhagen Interpretation of quantum mechanics. In the Copenhagen Interpretation it is proposed that events that can, in principle, be observed by a classical, conscious observer will have a state that is not described by an extended wavefunction. This classical observation is a particular geometric form which we know as a 'view' and this form is self evidently part of the form that is conscious observation. This would mean that any non-conscious brain activity that is not consistent with the conscious view will not only fail to be seen but for all practical purposes will not even continue to exist.

Continuity must be an important element in the phenomenon of experiencing only one state for the brain because at any moment our conscious experience is a continuation of what occurred before. Whatever causes the selection of a particular brain state is closely related to the form of the preceding conscious experience. It is as if conscious experience provides a template for succeeding states of the brain. The principle template for neural processing would be the form of the sensory world derived fairly directly from the senses.

The non-conscious processes in the brain depend on the view; without it they would become wholly dependent on the statistical state of the world as perceivable by some other conscious observer. This means that the form of our conscious experience may control the state of our brain, not through simple processes such as sending signals from place to place, but by selecting only that state of the brain that is consistent with the presence of the conscious experience. (cf: Dennett & Kinsbourne's (1992) Multiple Drafts Model).
Fil Albuquerque
 
Reply Tue 1 Nov, 2011 10:54 am
@igm,
...mind that at no moment did I adopt a naive realist approach in this matter; so far I have spoken of a relational function between external data and the accommodation processes in the brain, which obviously transform such data into functional parameters that can then be perceived and experienced by the conscious mind...what I don't give up believing in is the equivalence of information value, independently of the language or code in use between the qualia and the subject...
fresco
 
Reply Tue 1 Nov, 2011 11:06 am
@igm,
Hmmm...I see a problem with

Quote:
The principle template for neural processing would be the form of the sensory world derived fairly directly from the senses.


If you read up on "embodied cognition", e.g. http://plato.stanford.edu/entries/embodied-cognition/
you will find the arguments against "representationalism", including those questioning the traditional concept of "the senses" directly relaying information about "the world".

On the other hand, Dennett (et al.)'s concept of a continuity of cognitive state transitions, which select what constitutes "information" at any time, is consistent with embodiment (non-duality) concepts of "reality".

Varela, a proponent of embodiment, has a good line in classifying traditional "materialists" and "idealists". He sees the "consciousness problem" as a "chicken and egg problem". The chicken position: there is a world "out there" with pregiven properties. The egg position: the cognitive system projects its own world. Since experiments show that neither pole is tenable, Varela concludes that "consciousness" (aka "cognition") is an active process involving the dynamic inter-relationship of "inner" and "outer" states. The inner state (brain/body) is co-extensive with outer states (physical and social environment). Thus "the state of hunger" brings forth a world of potential food candidates, and as the hunger state changes to a sated one, the world is re-created according to other needs/interests. Neither "the world" nor "consciousness" is ever static.
(For an interesting digression consider cannibalism due to starvation and the accompanying states of consciousness)
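A minimal sketch (my own toy illustration, not Varela's) of that last point: what the agent registers as "information" is selected by its inner state, so a change of state literally brings forth a different world of salient items. All names here are hypothetical.

Code:
# Toy model: the inner state selects what counts as "information".
WORLD = ["bread", "rock", "water", "stick"]

# Hypothetical salience map; the entries are illustrative only.
SALIENCE = {"hungry": {"bread"}, "thirsty": {"water"}, "sated": set()}

def bring_forth(world, state):
    """Return the aspects of the world salient to the current inner state."""
    return [item for item in world if item in SALIENCE[state]]

print(bring_forth(WORLD, "hungry"))  # ['bread'] - a world of food candidates
print(bring_forth(WORLD, "sated"))   # []        - the "same" world, re-created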

igm
 
Reply Tue 1 Nov, 2011 12:54 pm
@Fil Albuquerque,
What is your view/comment on this (from the link in my previous post):

Strong AI is problematic for several reasons. Firstly it holds that a set of 'bits' can be conscious. 'Bits' can be any identifiable thing, steel balls in boxes can be 'bits' so strong AI holds that an array of balls in boxes can have conscious experience (see Note 1). Secondly, from the outset strong AI assumes that conscious experience is a process, however when you look around the room conscious experience is a set of things laid out in space at an instant; conscious experience is a geometrical form or state, not a process. It is the acquisition of the data within conscious experience that is a process, not the experience itself. A geometrical state such as conscious experience is lost if it is encoded as a stream of bits in an information system, the stream of bits will no longer have the geometrical form that is conscious experience. Lastly, strong AI contains data in a form that only has meaning to someone who has a lookup table from the symbols in the system to their significance as a state. It should be stressed that these arguments against strong AI/the simulation argument are not arguments against a physical basis for consciousness, a machine that replicated all the essential features of the brain could well become conscious but this consciousness would not itself be due to information processing. The contents of consciousness would be delivered by information processing but conscious experience itself is a state, a geometrical form. The machine would require a separate, consciousness module, that worked on different principles from an information processor.

Note 1: The difference between information as an encoding of a state and a particular state itself is geometrical. If an encoded state has an identical geometry to a particular state (congruent at every level) then it is a replication rather than a simulation. This suggests that the 'Knowledge Argument' is actually a form of the 'Simulation Argument' because the colour red studied by Mary is only identical to the 'red' in perception if it replicates the form of this 'red' exactly - 'red' encoded in a non-congruent form is a different entity.
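For what it's worth, the encoding point in Note 1 can be made concrete with a minimal Python sketch (my own illustration, not from the quoted text): flattening a 2D "state" into a linear stream preserves the data but not the geometry, and recovering the adjacency relations requires a convention external to the stream (the shape), which plays the role of the "lookup table" the quote mentions.

Code:
# A toy 2x2 "geometric state": adjacency is explicit in the structure.
state = [[1, 0],
         [0, 1]]

# Encode: flatten to a linear stream of bits. The stream alone no longer
# says which bits were neighbours.
stream = [bit for row in state for bit in row]
print(stream)  # [1, 0, 0, 1]

# Decoding needs a convention external to the stream (the shape),
# i.e. a "lookup table" from symbols back to a state.
shape = (2, 2)
decoded = [stream[i * shape[1]:(i + 1) * shape[1]] for i in range(shape[0])]
print(decoded == state)  # True, but only given the shape convention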
igm
 
Reply Tue 1 Nov, 2011 01:26 pm
@fresco,
fresco wrote:

...Since experiments show that neither pole is tenable, Varela concludes that "consciousness" (aka "cognition")...


Thanks! I'll study your reply, but the above confuses me, because I don't see the term 'consciousness' as equivalent to 'cognition'.

http://consc.net/papers/c-and-c.html
"Much progress is being made in the study of cognition, but consciousness itself is as much of a problem as it ever was."
Fil Albuquerque
 
Reply Tue 1 Nov, 2011 02:41 pm
@igm,
...the funny thing is that I end up being more idealist and relativist than you all...aside from abstract bits, I don't believe in any states except within a systemic relational context...every state therefore presupposes process, in my view; the state is not independent nor static but always dynamic, and thus related with other strings which alone would be non-functional and meaningless...

...another thing I noted, implied in your view as a contradiction, is that you end up believing some kind of base meaning must be intrinsic to things (otherwise, as your argument goes, we would need the code to solve the encryption), whereas I believe that meaning is always functional or relational: we universally recognize geometric patterns in a code and then adapt those geometrical patterns functionally to our own needs, selecting from the chained patterns the segments we can functionally absorb and accommodate, and subsequently transforming the messages assembled, though not the base patterns themselves; some segments will be ignored as noise and removed from the chain, and others will be paired in their place, jumping forward or back at convenience, resulting in a reinterpretation of the final meaning, even though the base pattern recognition is universal (the base function of systems like "Lego")...remember that codes can be broken without the solution being presented all at once...

...finally, concerning your argument that conscious experience is a "thing": we would first have to debate what thing is not itself a process that you can recall...I have the impression that the permanence of a sensation as a "form" in short-term memory may be the cause of the intuitive feeling we all have that the experience of consciousness is a thing in itself rather than a complex relational process...
Fil Albuquerque
 
Reply Tue 1 Nov, 2011 03:01 pm
@Fil Albuquerque,
(above edited for clarity)
 
JLNobody
 
Reply Tue 1 Nov, 2011 03:33 pm
@igm,
IGM, I agree with both points: (1) cognition is an aspect of consciousness, not a synonym for it*, and (2) a degree of progress may be made in the study of cognition by modeling the latter on some characteristics of artificial intelligence, but in no way does that help us with the problem of consciousness.
*When meditating, for instance, one sees thoughts swimming in a pool of consciousness; but while the two demonstrate considerable interdependence, they are not isomorphic.
 
igm
 
Reply Tue 1 Nov, 2011 04:39 pm
@Fil Albuquerque,
... and your thoughts on this:

It is still easy to fall into confusion or to equivocate when talking of "consciousness", so here I will divide the first-person problem into three parts: the problems of sensory qualia, subjective mental content, and the existence of subjective experience.

(1) The problem of sensory qualia

Qualia are the qualitative aspects of our mental states, most obviously of our sensations. The paradigm qualia are those of color sensations; other favorites are the taste of chocolate, the sound of middle C, pleasure and pain. All of these are poorly understood. When we look at a red patch, this sets off a particular pattern of neural firings in our brain. Why should this physical process be accompanied by a rich, subjective sensation? Given that it is accompanied by a sensation, why is it this sort of sensation (the red sort) rather than that sort (the green sort)? There are two issues here: why qualia exist at all, and why particular qualia accompany particular processes. Is the correspondence of qualia to processes arbitrary, or is there some systematicity that we do not understand?

Jackson (1982) has provided the most recent reminder of the qualia mystery, with a sharpening of the argument of Nagel (1974) before him. A future scientist, living in a time when neuroscience is completely understood, might learn everything there is to know about physical brain-processes. But if she has lived all her life in a black-and-white room, she will still not know what it is like to see red; when she sees red for the first time, she will learn something. It seems that the third-person approach, at least as currently understood, cannot tell us about the nature of qualia.

(2) The problem of subjective mental content.

When I think about a lion, something takes place in my subjective experience that has something to do with lions. Again, a straight physical account gives no reason to believe that such an experience should take place. What should a pattern of neural firings have to do with lions? But somehow, my thoughts are about something; they have subjective mental content. It is easy to make attributions of mental content to a system, justified perhaps by causal relations with the external world, but for subjective mental content we need something stronger. We need brain-states to carry intrinsic content, independent of our systems of external attribution; there must be a natural (in the strongest sense, i.e., defined by nature) mapping from physical state to content.

The problem of subjective mental content is not entirely different in kind from that of sensory qualia - the experience of content is itself qualitative, in a way. The main difference is that sensory qualia usually arise during external perception, whereas this sort of mental content arises during thought. (There is also a third-person problem of mental content, which has been raging for years, centering on the question of how we can assign propositional attitudes, such as beliefs and desires concerning the world, to systems and persons. In some ways this is an easier problem, as it may rely on human-defined systems of attributions; these contents may have the status of theoretical entities, rather than states that are presented to us directly. In other ways, the first-person problem is easier, as it may not have to deal with the problem of reference. When I think of a lion, my phenomenology bears some relation to a lion, but the relationship seems more like shared pattern than reference.)

(3) The existence of subjective experience

The two items above are concerned with the nature of our subjective states - why they are one way rather than another. But it is just as deep a problem why subjective states should exist in the first place. Why should it be like anything to be me? If I did not know that subjective states existed, it would seem unreasonable to postulate them. This is perhaps the deepest question of all, and no current theory has come close to dealing with it.

Not many people believe in zombies - humans with normal behavior but without any subjective mental states. These may be logically possible, but it seems implausible that there could be such things in the actual world. At least some people believe there could be functional zombies, however: beings which duplicate the functional organization of humans, perhaps computationally, without being conscious at all (e.g. Searle 1980, Block 1980). The question "what sort of entities can be subjects of experience?" is of great popular interest. For example, is every entity exhibiting intelligent behavior conscious? Could an appropriately programmed computer be conscious? I will argue, by combining first-person and third-person considerations, that the possible existence of functional zombies is implausible.

There are some other commonly-raised first-person problems not explicitly listed above. The problem of self-consciousness (or self-awareness) I take to be a subset of the problem of awareness, of which the difficult aspects are covered by (1) and (2). The problem of personal identity is a separate issue and a very deep one; but Parfit's exhaustive analysis (1986), which combines the first-person and third-person approaches to great effect, gives reason to believe that our first-person intuitions here may be mistaken. It is the three problems listed above that seem to be the residual content of the traditional mind-body problem. I will be using the term "consciousness" broadly to cover the phenomena of all of these problems. If you prefer, replace every occurrence of "consciousness" with "the subjective experience of qualia and mental content."

One difficulty with talking about first-person problems is that for every first-person (conscious) mental state, there is a corresponding third-person (functionally definable) mental state. (Perhaps there are not two different mental states, but simply two different ways of viewing one state; this is unclear. In any event, it is uncontroversial that for every subjective mental event there is a corresponding physical event; the physical event may be viewed via functional abstraction as a third-person mental event). For every subjective sensation there corresponds an objectively characterizable perception. This dichotomy in (ways of looking at) mental states makes things a little confusing, but it will be useful later on.
http://consc.net/papers/c-and-c.html
igm
 
Reply Tue 1 Nov, 2011 04:46 pm
@igm,
Fil...last post updated...
 
 
