*new* mind is more than brain (???)

 
 
Reply Fri 17 Jul, 2009 05:21 pm
Alright I'm going to say "no" on this one.

I mean I've thought about the possibility of there being some kind of anima mundi that plays some role in the mind but I'm not going to assume such an epiphenomenal intrusion for more or less the same reason that I'm not going to assume there's an invisible dragon in the garage, as per Carl Sagan:[INDENT]Suppose (I'm following a group therapy approach by the psychologist Richard Franklin) I seriously make such an assertion to you. Surely you'd want to check it out, see for yourself. There have been innumerable stories of dragons over the centuries, but no real evidence. What an opportunity!

"Show me," you say. I lead you to my garage. You look inside and see a ladder, empty paint cans, an old tricycle -- but no dragon.


"Where's the dragon?" you ask.


"Oh, she's right here," I reply, waving vaguely. "I neglected to mention that she's an invisible dragon."


You propose spreading flour on the floor of the garage to capture the dragon's footprints.


"Good idea," I say, "but this dragon floats in the air."


Then you'll use an infrared sensor to detect the invisible fire.


"Good idea, but the invisible fire is also heatless."


You'll spray-paint the dragon and make her visible.


"Good idea, but she's an incorporeal dragon and the paint won't stick." And so on. I counter every physical test you propose with a special explanation of why it won't work.


Now, what's the difference between an invisible, incorporeal, floating dragon who spits heatless fire and no dragon at all? If there's no way to disprove my contention, no conceivable experiment that would count against it, what does it mean to say that my dragon exists? Your inability to invalidate my hypothesis is not at all the same thing as proving it true. Claims that cannot be tested, assertions immune to disproof are veridically worthless, whatever value they may have in inspiring us or in exciting our sense of wonder. What I'm asking you to do comes down to believing, in the absence of evidence, on my say-so. The only thing you've really learned from my insistence that there's a dragon in my garage is that something funny is going on inside my head. You'd wonder, if no physical tests apply, what convinced me. The possibility that it was a dream or a hallucination would certainly enter your mind. But then, why am I taking it so seriously? Maybe I need help. At the least, maybe I've seriously underestimated human fallibility. Imagine that, despite none of the tests being successful, you wish to be scrupulously open-minded. So you don't outright reject the notion that there's a fire-breathing dragon in my garage. You merely put it on hold. Present evidence is strongly against it, but if a new body of data emerge you're prepared to examine it and see if it convinces you. Surely it's unfair of me to be offended at not being believed; or to criticize you for being stodgy and unimaginative -- merely because you rendered the Scottish verdict of "not proved."


Imagine that things had gone otherwise. The dragon is invisible, all right, but footprints are being made in the flour as you watch. Your infrared detector reads off-scale. The spray paint reveals a jagged crest bobbing in the air before you. No matter how skeptical you might have been about the existence of dragons -- to say nothing about invisible ones -- you must now acknowledge that there's something here, and that in a preliminary way it's consistent with an invisible, fire-breathing dragon.[/INDENT]So, without resorting to idle metaphysical speculation, and instead trying to make out the gestalt that all the scientific data we have point to, I am going to assume that consciousness is an emergent phenomenon of the brain.

This has some interesting consequences. It leads me to believe that consciousness exists on a continuum, because no one neuron is responsible for the whole experience. So an array of disembodied rat neurons like this one:

http://neurophilosophy.files.wordpress.com/2006/08/mea.jpg

being used as a kind of living computer has some kind of conscious experience, though very diminished compared to that of a full-fledged rat.

I am also led to believe that an artificial brain built along lines very similar to our own will exhibit consciousness, and that neuromimetic devices could restore and even enhance various brain functions. (In fact, they already are.)

Two interesting standing questions for me are: "If the human brain were outfitted with the ability to sense infrared or use echolocation, how would these things be perceived?" and "What else could exhibit consciousness?" e.g., an artificial brain based on belief networks or something similar.

(Oh and one more thing: the US medical industry has nothing to do with this thread. Let's get that out of the way now.)
Type: Discussion • Score: 1 • Views: 5,156 • Replies: 114

 
William
 
  1  
Reply Fri 17 Jul, 2009 06:26 pm
@odenskrigare,
Hello.................again, Oden.

Your intellect is indeed exceptional for someone so young. I am going by the photo you submitted, if indeed that is you. Your failure to "connect" with many members of this forum, IMO, is your EGO. Your "fire breathing dragon" analogy has been the subject of discussion for centuries. Not everything in this universe is detectable or empirical. You have motives that drive you, in that learning comes easy for you. You have an unquenchable thirst, as do most truly gifted people, and it must be satisfied. Kinda like that little robot "Number 5" in the movie "SHORT CIRCUIT" back in 1986 who had a "thirst for knowledge". Unlike his fellow robots, who were cold and calculating, he got struck by lightning and was "humanized". This little guy could not get enough "input" and he devoured everything he could get his metallic hands on: encyclopaedias, television, everything. He was being "re-programmed" by "other input" and not just what the military had introduced. He was designed to be a weapon and was subject to the programming the military needed to engage in conflict. But the lightning strike changed all that, and he began getting other input that "over-rode" his initial programming and developed a better understanding that said "conflict is bad". The only difference between you and this "robot" is that you haven't been "struck by lightning" yet, and are used to "winning conflicts" to the point that your ego will not hear of defeat. Your EGO can't accept it, even if it suffers pyrrhic victories which kill any friendships you could have had, as you have learned to live without them. You don't need them if they so much as challenge your intellect. Such was illustrated in your initial post after you became a member of this forum; and I quote:

"My guiding principles are "look out for #1" and "odi profanum vulgus et arceo" ... when you come down to brass tacks, most people are full-blown morons with whom there is no benefit in associating. If they were on fire, I wouldn't piss on them to put it out. I'm in favor of tossing the masses a few nickels if it shuts them up but really all other government policy should be geared towards the elite, with allowances for upwards mobility ... if you can earn it. Also we're a doomed species"

"Doomed species, huh?" Talk about self-fulfilling prophecies. Who would expect any warmth from a person with such negative outlooks on mankind? In your effort to "change brains", I can understand why you would endeavor to be a fan of this technology; being we are doomed anyway, what is there to lose, right? All you need, my young friend, is a "lightning strike" of humility, and then you'll get that warmth from the other people you consider morons. I have only posted this in hopes that you might hear my words and think about them a little.

I have never posted anything like this before; just consider it the "father" instinct in me. I wish you well in your "new" thread, and good luck in perhaps toning down your "output" and concentrating on the "new" input others might offer.

William

Mods, sorry if this post is out of line. If so do as you will, as always.
0 Replies
 
odenskrigare
 
  1  
Reply Fri 17 Jul, 2009 06:31 pm
@odenskrigare,
Ok can we not talk about me but about the thread rubric, don't get my thread locked two posts in

But yeah that really is a pic of me. So is this:

http://content9.flixster.com/question/47/50/93/4750935_std.jpg

SO, GUYS, HOW ABOUT THE MIND THEN?
Zetetic11235
 
  1  
Reply Fri 17 Jul, 2009 07:26 pm
@odenskrigare,
odenskrigare;78034 wrote:
Ok can we not talk about me but about the thread rubric, don't get my thread locked two posts in

But yeah that really is a pic of me. So is this:

http://content9.flixster.com/question/47/50/93/4750935_std.jpg

SO, GUYS, HOW ABOUT THE MIND THEN?


This is a painting of me. My friend did it as I sat and solved the problems of the Universe.

http://wordincarnate.files.wordpress.com/2008/11/prophet.jpg


Personally I wonder how fruitful this discussion could actually be. You have made a pretty broad epistemic statement and shown how you have drawn your conclusion. I can present my spin on it.

I think that the mind is a phenomenon that emerges when enough neurons are sustained in some sort of nutrient-rich medium. It is primarily an organizational phenomenon by which the individual neurons work synchronously with any other detectable neurons. In any case, it seems to me that the mind emerges due to the simultaneous organization of the constituent neurons, sort of like an extreme form of swarm intelligence, where the constituents have a very simple set of possible actions and in no way could be considered intelligent.
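To make the swarm-intelligence analogy concrete, here is a toy sketch (my own illustration in Python, not a model of real neurons): Kuramoto-style oscillators, each following one simple local rule, namely nudging its phase a little toward everyone else's. No single unit could be called intelligent, yet the population organizes itself into synchrony.

```python
import math
import random

# Toy sketch (not a neuron model): Kuramoto-style oscillators.
# Each unit follows one simple local rule: shift your phase slightly
# toward the other units' phases. None of the units is "intelligent",
# yet the collective organizes itself into synchrony.
random.seed(1)
N, K, dt = 20, 2.0, 0.05
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def coherence(ph):
    # Order parameter r: 1.0 means perfect synchrony, ~0 means disorder.
    x = sum(math.cos(p) for p in ph) / len(ph)
    y = sum(math.sin(p) for p in ph) / len(ph)
    return math.hypot(x, y)

before = coherence(phases)
for _ in range(400):
    phases = [p + dt * (K / N) * sum(math.sin(q - p) for q in phases)
              for p in phases]
after = coherence(phases)
print(before < 0.9 < after)  # disorder gives way to synchrony
```

The only point of the sketch is the organizational one: the global order is a property of the collective, not of any individual unit.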

It could be the case that there is a hierarchy based on not just the amount of neurons, but the organizational efficiency of said neurons. This efficiency could be the result of the nutrition of the neurons (glial cell count, possibly other sources). This would explain why someone can lose a large portion of their brain and still function, as long as the core organizational structure is intact.

Neuroscience isn't really my specialty, so any deeper understanding that someone might want to share would be appreciated.
odenskrigare
 
  1  
Reply Fri 17 Jul, 2009 07:35 pm
@Zetetic11235,
Zetetic11235;78037 wrote:
It could be the case that there is a hierarchy based on not just the amount of neurons, but the organizational efficiency of said neurons. This efficiency could be the result of the nutrition of the neurons (glial cell count, possibly other sources). This would explain why someone can lose a large portion of their brain and still function, as long as the core organizational structure is intact.


There was an article about this just today citing a prof from my uni!

The Fancier The Cortex, The Smarter The Brain?
0 Replies
 
jgweed
 
  1  
Reply Fri 17 Jul, 2009 07:57 pm
@odenskrigare,
"...consciousness is an emergent phenomenon of the brain."

In one way, this statement seems correct, in that for all we know the brain and its workings are the cause of consciousness in us as well as, in a simpler form, in animals. In another way, since humans have seemingly similar consciousnesses and this most likely cannot be caused solely by the brain, the statement seems only a partial explanation.

We would not expect an artificial brain, however complex, even if it could receive cognitive data, to be able to process it on its own without human programming. Still less, perhaps, can the human brain do so.
Could an artificial brain, on its own, learn to comprehend that it has a closed past, a fleeting present, and a future into which it must reach?
odenskrigare
 
  1  
Reply Fri 17 Jul, 2009 08:16 pm
@jgweed,
jgweed;78042 wrote:
In another way, since humans have seemingly similar consciousnesses and this most likely cannot be caused solely by the brain, the statement seems only a partial explanation.


huh

jgweed;78042 wrote:
We would not expect an artificial brain, however complex, even if it could receive cognitive data, to be able to process it on its own without human programming. Still less, perhaps, can the human brain do so.


People are frequently confused about what a computer is. We are computers of a kind, and the most recent ongoing attempts at artificial intelligence depend on architectures which are, in many respects, fundamentally different from the conventional von Neumann-type systems we use every day. One of the most important differences is that they are self-programming.

I cannot stress this enough: get out of the fan-cooled beige von Neumann box if you want to come to full understanding. I doubt it would be possible to make an artificial brain programmed specifically to respond to every case in the world, but by using neural architectures we can make a system that constantly learns on its own, and develops useful invariant representations of different causes, becoming able to respond flexibly to its environment.
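As a cartoon of what "self-programming" means (a bare perceptron in Python, vastly simpler than the architectures named above, and purely my own illustration): the OR rule below is never coded in anywhere; the weights organize themselves from examples.

```python
# Illustrative only: a single perceptron that is never given the rule
# for logical OR, only examples of it. It adjusts its own weights until
# its behavior matches the examples. Hand-coding a response for every
# case in the world is exactly what this avoids.
weights = [0.0, 0.0]
bias = 0.0
lr = 0.1

# Training examples for logical OR: (input pair, desired output).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def predict(x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

for _ in range(20):  # a few passes over the examples suffice
    for x, target in data:
        error = target - predict(x)
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error

print([predict(x) for x, _ in data])  # [0, 1, 1, 1]
```

The design point carries over to the far more elaborate systems mentioned above: the programmer supplies a learning rule, not a lookup table of responses.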

Not all of these ideas have come to fruition yet, but I should point out that many of their essential principles have been carried out successfully, not merely in the laboratory but in the commercial sector as well. Even as we speak, increasingly realistic biologically-inspired artificial neural networks are being pitched in industry by, e.g., Numenta and Imagination Engines, and you even have researchers figuring out how to make synthetic, but biologically realistic, neural networks to control power grids based on experiments with disembodied rat neurons:

This is your grid on brains - Missouri S&T News and Events[INDENT]Led by Dr. Ganesh Kumar Venayagamoorthy, associate professor of electrical and computer engineering, the researchers will use living neural networks composed of thousands of brain cells from laboratory rats to control simulated power grids in the lab. From those studies, the researchers hope to create a "biologically inspired" computer program to manage and control complex power grids in Mexico, Brazil, Nigeria and elsewhere.

"We want to develop a totally new architecture than what exists today," says Venayagamoorthy, who also directs the Real-Time Power and Intelligent Systems Laboratory at Missouri S&T. "Power systems control is very complex, and the brain is a very flexible, very adaptable network. The brain is really good at handling uncertainties."


...


Through this research, Venayagamoorthy and his colleagues hope to develop what he calls BIANNs, or biologically inspired artificial neural networks. Based on the brain's adaptability, these networks could control not only power systems, but also other complex systems, such as traffic-control systems or global financial networks.

The Georgia Tech researchers, led by Potter, have developed living neural networks that can control simple robots, but this will be the first time anyone has attempted to tap the brain power to control more complex systems.

After testing the system in simulated environments, the researchers will then test them in actual power grids in Mexico, Brazil, China, Nigeria, Singapore and South Africa. One goal of the project is to develop a system that can be implemented in the "future intelligent power grid," says Venayagamoorthy. The researchers envision that grid integrating a variety of power sources, such as wind and solar farms, energy storage facilities, self-sustainable community or neighborhood micro-grids, and other non-traditional energy sources.[/INDENT]So yeah in the near future it is very likely that power grids, for example, will be managed by little synthetic brains.

jgweed;78042 wrote:
Could an artificial brain, on its own, learn to comprehend that it has a closed past, a fleeting present, and a future into which it must reach?


Uh yeah why not?
0 Replies
 
paulhanke
 
  1  
Reply Fri 17 Jul, 2009 08:34 pm
@odenskrigare,
odenskrigare;78027 wrote:
This has some interesting consequences. It leads me to believe that consciousness exists on a continuum, because no one neuron is responsible for the whole experience. So an array of disembodied rat neurons like this one:

http://neurophilosophy.files.wordpress.com/2006/08/mea.jpg

being used as a kind of living computer has some kind of conscious experience, though very diminished compared to that of a full-fledged rat.


... I am disinclined to agree with that assertion ... emergents that are simple enough for us to study are typically layered ... for example, you don't automatically get liquid water by pumping a room full of two parts oxygen and one part hydrogen ... you only get water by first causing oxygen and hydrogen atoms to organize into H2O molecules and then allowing a large collective of the emergent properties of H2O molecules to interact at room temperature to produce the higher-level emergent property "liquid" ... given that such a simple example as this is layered, I would hazard to guess that conscious experience is pretty far removed from neurons in terms of intermediate emergent layers ... so a small network of disembodied neurons is probably not enough to realize many of the intermediate emergent layers that pave the road to mind, let alone conscious experience ...

odenskrigare;78027 wrote:
... and even enhance various brain functions. (In fact, they are even now.)


... that being said, technology in general is already enhancing various mind functions (ever read Andy Clark's "Natural Born Cyborgs"?) ...

odenskrigare;78027 wrote:
"If the human brain were outfitted with the ability to sense infrared or use echolocation, how would these things be perceived?"


... probably as space filled with objects ... there is a technology that has been developed for the blind that turns visual scenes into tactile representations ... at first, it just feels like a random tickle on the skin ... after a while, the brain begins to correlate the patterns in these tickles with other proprioceptive and perceptive inputs ...

odenskrigare;78027 wrote:
... and "What else could exhibit consciousness?" e.g., an artificial brain based on belief networks or something similar.


... possibly ... but in the case of belief networks, it would be a distinctly non-human consciousness (the human mind notoriously sucks at intuiting conditional probabilities) ... but I would also suggest that this artificial brain would need to be given embodiment and put in charge of its own survival before it would have any chance of exhibiting consciousness - without the responsibility of self-perpetuation, there doesn't seem to be any reason for it to develop a sense of "self" and "other" ...
odenskrigare
 
  1  
Reply Fri 17 Jul, 2009 08:52 pm
@paulhanke,
paulhanke;78047 wrote:
... I am disinclined to agree with that assertion ... emergents that are simple enough for us to study are typically layered ... for example, you don't automatically get liquid water by pumping a room full of two parts oxygen and one part hydrogen ... you only get water by first causing oxygen and hydrogen atoms to organize into H2O molecules and then allowing a large collective of the emergent properties of H2O molecules to interact at room temperature to produce the higher-level emergent property "liquid" ... given that such a simple example as this is layered, I would hazard to guess that conscious experience is pretty far removed from neurons in terms of intermediate emergent layers ... so a small network of disembodied neurons is probably not enough to realize many of the intermediate emergent layers that pave the road to mind, let alone conscious experience ...


Well all bets are off, I mean we can't even ask these things how they feel at this juncture.


paulhanke;78047 wrote:
... that being said, technology in general is already enhancing various mind functions (ever read Andy Clark's "Natural Born Cyborgs"?) ...


No but I get the idea

paulhanke;78047 wrote:
... probably as space filled with objects ... there is a technology that has been developed for the blind that turns visual scenes into tactile representations ... at first, it just feels like a random tickle on the skin ... after a while, the brain begins to correlate the patterns in these tickles with other proprioceptive and perceptive inputs ...


Didn't that blind Mt. Everest climber guy use one of those? (It was fitted to his tongue if I remember correctly.)


paulhanke;78047 wrote:
... possibly ... but in the case of belief networks, it would be a distinctly non-human consciousness (the human mind notoriously sucks at intuiting conditional probabilities)


At a conscious level, that is usually the case, but "under the hood," that is not necessarily true:

Bayesian brain - Wikipedia, the free encyclopedia
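A minimal sketch of the kind of inference the Bayesian-brain literature describes, with made-up numbers (this is an illustration of Bayes' rule, not anyone's actual model of cortex):

```python
# Combine a prior belief with noisy evidence via Bayes' rule.
def posterior(prior, likelihood, likelihood_given_not):
    # P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical percept: 1% prior that the shadow ahead is a predator;
# a rustle is heard 90% of the time a predator is present, 10% otherwise.
p = posterior(prior=0.01, likelihood=0.9, likelihood_given_not=0.1)
print(round(p, 3))  # 0.083: the belief rises sharply but stays modest
```

Nobody does this arithmetic consciously; the hypothesis is that neural processing approximates it "under the hood".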

paulhanke;78047 wrote:
... but I would also suggest that this artificial brain would need to be given embodiment and put in charge of its own survival before it would have any chance of exhibiting consciousness - without the responsibility of self-perpetuation, there doesn't seem to be any reason for it to develop a sense of "self" and "other" ...


You are using a kind of behaviorist approach to intelligence which has pretty much fallen apart.
richrf
 
  1  
Reply Fri 17 Jul, 2009 09:07 pm
@paulhanke,
paulhanke;78047 wrote:
given that such a simple example as this is layered, I would hazard to guess that conscious experience is pretty far removed from neurons in terms of intermediate emergent layers ... so a small network of disembodied neurons is probably not enough to realize many of the intermediate emergent layers that pave the road to mind, let alone conscious experience ...


Rupert Sheldrake, a British biologist, suggests that everything in the universe is evolving and that the evolving forms are contained within morphic fields (energy fields with shape) that consist of patterns governing the development of forms, structures and arrangements.

This arrangement not only allows for evolution of forms and structures but also of the laws that govern them.

For me, consciousness uses energy to create these morphic fields, one of which, for example, would be a brain that can transmit and receive the form representations.

This is a fascinating theory that creates a bridge between consciousness, energy, form, and matter. He has written several papers on this subject.
paulhanke
 
  1  
Reply Fri 17 Jul, 2009 09:20 pm
@odenskrigare,
odenskrigare;78052 wrote:
At a conscious level, that is usually the case, but "under the hood," that is not necessarily true:

Bayesian brain - Wikipedia, the free encyclopedia


... ah - if you're talking about using Bayesian mathematics at a substrate level as a functional approximation (improvement?) of basic brain functioning, that's one thing ... but "belief networks" are abstractions/models of higher-level conscious reasoning, are they not?

odenskrigare;78052 wrote:
You are using a kind of behaviorist approach to intelligence which has pretty much fallen apart.


... actually, I'm using an autopoietic approach to the emergence of meaning, which isn't even in the same ballpark as behaviorism ...
odenskrigare
 
  1  
Reply Fri 17 Jul, 2009 09:32 pm
@richrf,
richrf;78054 wrote:
Rupert Sheldrake, a British biologist, suggests that everything in the universe is evolving and that the evolving forms are contained within morphic fields (energy fields with shape) that consist of patterns governing the development of forms, structures and arrangements.


Oh nice learn something new every day:

morphic resonance - The Skeptic's Dictionary - Skepdic.com

Rupert's Resonance: Scientific American

[indent]Third, in 2000 John Colwell of Middlesex University in London conducted a formal test using Sheldrake's experimental protocol. Twelve volunteers participated in 12 sequences of 20 stare or no-stare trials each and received accuracy feedback for the final nine sessions. Results: subjects could detect being stared at only when accuracy feedback was provided, which Colwell attributed to the subjects learning what was, in fact, a nonrandom presentation of the trials. When University of Hertfordshire psychologist Richard Wiseman also attempted to replicate Sheldrake's research, he found that subjects detected stares at rates no better than chance.[/indent]

How surprising
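For a sense of what "no better than chance" means quantitatively, here is a quick sketch with invented trial counts (not Colwell's or Wiseman's actual data): the exact binomial probability of a score at least that good from pure guessing.

```python
from math import comb

# Probability of k or more hits in n stare/no-stare trials by guessing
# alone (p = 0.5). The trial counts below are invented for illustration,
# not taken from the studies quoted above.
def p_at_least(k, n, p=0.5):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# 110 hits out of 200 looks "above chance" but would happen by
# luck alone roughly 9% of the time:
print(round(p_at_least(110, 200), 3))
```

This is why feedback-free replications that land near 50% hit rates count as null results.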
0 Replies
 
paulhanke
 
  1  
Reply Fri 17 Jul, 2009 09:33 pm
@richrf,
richrf;78054 wrote:
For me, consciousness uses energy to create these morphic fields, one of which, for example, would be a brain that can transmit and receive the form representations.


... I'm with ya up to this point ... I just don't see any reason to posit that consciousness is anything but one of the multitude of forms/structures/laws that have emerged/evolved over time Wink ...
odenskrigare
 
  1  
Reply Fri 17 Jul, 2009 09:35 pm
@paulhanke,
paulhanke;78057 wrote:
... ah - if you're talking about using Bayesian mathematics at a substrate level as a functional approximation (improvement?) of basic brain functioning, that's one thing ... but "belief networks" are abstractions/models of higher-level conscious reasoning, are they not?


Yes, but my point is that the brain is known to exhibit Bayesian reasoning behaviors in some respects, if not consciously

paulhanke;78057 wrote:
... actually, I'm using an autopoietic approach to the emergence of meaning, which isn't even in the same ballpark as behaviorism ...


Ok since I don't know what that means, I'm simply going to ask why self-preservation is necessary to self-awareness

I'm pretty sure kamikaze pilots were self-aware...

---------- Post added 07-17-2009 at 11:38 PM ----------

paulhanke;78059 wrote:
... I'm with ya up to this point ... I just don't see any reason to posit that consciousness is anything but one of the multitude of forms/structures/laws that have emerged/evolved over time Wink ...


The description in that post was too vague for me to really get a clear picture in my head, so I looked at the Wikipedia article, as well as the Skeptic's Dictionary and, via a link from there, an article by Michael Shermer, and I have to say MR is not terribly convincing.
paulhanke
 
  1  
Reply Fri 17 Jul, 2009 10:10 pm
@odenskrigare,
odenskrigare;78060 wrote:
Ok since I don't know what that means, I'm simply going to ask why self-preservation is necessary to self-awareness


... of what evolutionary value is a sense of self in the absence of any responsibility for self-perpetuation? ... if I program a machine to play chess and pamper it with the utmost care (say, Deep Blue), what reason does it have to become self aware? ... on the other hand, if I program the machine to monitor its operating parameters and give it the ability and the goal to take action to ensure its continued survival, arm it with an evolutionary module to evolve new behaviors in response to novel threats, and dump it in a challenging environment, I think that machine has a pretty good reason to evolve an ontology of "self" and "other" ...
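That "evolutionary module" idea can be sketched in a few lines (a (1+1) evolution strategy in Python; the survival function here is a stand-in I made up, not a real model of threats):

```python
import random

# Bare-bones "evolutionary module": the machine's behavior is a
# parameter vector, random mutations of it are tried, and a mutant
# replaces the current behavior whenever it scores at least as well
# on survival.
random.seed(42)

def survival_score(behavior):
    # Hypothetical environment: survival peaks at behavior (0.7, -0.3).
    return -((behavior[0] - 0.7) ** 2 + (behavior[1] + 0.3) ** 2)

behavior = [0.0, 0.0]
for _ in range(500):
    mutant = [b + random.gauss(0, 0.1) for b in behavior]
    if survival_score(mutant) >= survival_score(behavior):
        behavior = mutant  # the novel behavior is retained

print(survival_score(behavior) > survival_score([0.0, 0.0]))  # True
```

Whether anything like a "self" would emerge from such a loop is of course the open philosophical question; the sketch only shows the mechanics of evolving behavior against a survival criterion.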

odenskrigare;78060 wrote:
I'm pretty sure kamikaze pilots were self-aware...


... ah, but now we're talking about a social animal that has instincts to balance the perpetuation of self with the perpetuation of its offspring/tribe, yes? ...
odenskrigare
 
  1  
Reply Fri 17 Jul, 2009 10:17 pm
@paulhanke,
paulhanke;78063 wrote:
... of what evolutionary value is a sense of self in the absence of any responsibility for self-perpetuation?


I can think of some, but that's not the point.

Self-preservation is conducive to developing self-awareness, but not necessary.

Especially in the case of an artificial brain.

paulhanke;78063 wrote:
... ah, but now we're talking about a social animal that has instincts to balance the perpetuation of self with the perpetuation of its offspring/tribe, yes? ...


Ok, different approach:

"I'll bet people who commit suicide are often (painfully) self-aware."
paulhanke
 
  1  
Reply Fri 17 Jul, 2009 10:51 pm
@odenskrigare,
odenskrigare;78065 wrote:
I can think of some, but that's not the point.

Self-preservation is conducive to developing self-awareness, but not necessary.

Especially in the case of an artificial brain.


... you may have a point there ... in principle, I could make an exact functional copy of my brain in-silico and it would be self aware ... but is this the same thing as developing self awareness? - or am I merely copying an existing instance of it? ... and is this relevant to the question of self awareness emerging out of any arbitrary collection of artificial/biological neurons (and not just appearing in exact functional copies of existing instances of self awareness)?

odenskrigare;78065 wrote:
Ok, different approach:

"I'll bet people who commit suicide are often (painfully) self-aware."


... things go wrong in animals all the time, both physical and psychological ... deformity, injury, disease, mutation, etc. ... if suicide were the norm, you'd have a point ...
odenskrigare
 
  1  
Reply Fri 17 Jul, 2009 10:57 pm
@paulhanke,
paulhanke;78068 wrote:
... you may have a point there ... in principle, I could make an exact functional copy of my brain in-silico and it would be self aware ... but is this the same thing as developing self awareness? - or am I merely copying an existing instance of it? ... and is this relevant to the question of self awareness emerging out of any arbitrary collection of artificial/biological neurons (and not just appearing in exact functional copies of existing instances of self awareness)?


I'll cut to the chase and say that any sufficiently advanced brain hooked up with sensors and at least enough effectors to give itself an impression that it has a real presence in the world will definitely exhibit self-awareness

e.g., eyes to see, legs, or wheels or whatever to get in front of a mirror

This has nothing to do with self-preservation

paulhanke;78068 wrote:
... things go wrong in animals all the time, both physical and psychological ... deformity, injury, disease, mutation, etc. ... if suicide were the norm, you'd have a point ...


I didn't say it was "the norm", and that's not the issue.

Each year, approximately one million people die of suicide. I imagine that nearly all of them are, or were, self-aware.

Also mutation is essentially productive.
paulhanke
 
  1  
Reply Sat 18 Jul, 2009 10:44 am
@odenskrigare,
odenskrigare;78069 wrote:
I'll cut to the chase and say that any sufficiently advanced brain hooked up with sensors and at least enough effectors to give itself an impression that it has a real presence in the world will definitely exhibit self-awareness

e.g., eyes to see, legs, or wheels or whatever to get in front of a mirror

This has nothing to do with self-preservation


... okay, I'll meet you halfway, as it seems I have been harboring a biological bias with respect to the emergence of meaning Wink ... so consciousness is not an emergent of any arbitrary batch of neurons, but rather is an emergent of neurons hooked up to sensors and effectors (i.e., embodied) within a larger playing field (i.e., world) ... on the other hand, self-perpetuation is not needed to create the meaning we associate with the words "self" and "other" - merely being embodied in a world is enough for a sufficiently advanced "brain" to establish this meaning ... sound agreeable so far? Smile

odenskrigare;78069 wrote:
I didn't say it was "the norm", and that's not the issue.

Each year, approximately one million people die of suicide. I imagine that nearly all of them are, or were, self-aware.


... I think I see where you're going with this ... your argument is that if self awareness developed in response to the will to live, then when the will to live disappears self awareness should also disappear ... and while I think this is moot to this thread at this point (see above), I would hazard to say that the argument is false ... consider a boy who practices with an abacus every day ... before he started, he barely knew 1+1=2; now he can perform incredible calculations ... now take the abacus away ... is the boy back to barely knowing 1+1=2? - or can he move his hands and manipulate an internalized abacus to perform incredible calculations? ... that is, can you make something that has developed in response to a stimulus instantly disappear merely by removing the stimulus? - especially if that something has become self-reinforcing?

odenskrigare;78069 wrote:
Also mutation is essentially productive.


... actually, the vast majority of mutations are destructive, if not fatal ... it's only once in a blue moon that a mutation has adaptive value ...
0 Replies
 
richrf
 
  1  
Reply Sat 18 Jul, 2009 11:19 am
@paulhanke,
paulhanke;78059 wrote:
... I'm with ya up to this point ... I just don't see any reason to posit that consciousness is anything but one of the multitude of forms/structures/laws that have emerged/evolved over time Wink ...


I guess I see consciousness creating these forms/structures/laws, just like consciousness creates a painting.

Rich
 
