@Pathfinder,
Pathfinder;91956 wrote:So did you just say that some realm of science has endeavored to devise some sort of robotic brain that is attempting to understand the nuances of the human consciousness? And that at this point they have not been able to have much success?
My question still remains I think:
What exactly is it that they have designed?
... just to make sure we're on the same page on this, it is not the robotic brain that is attempting to understand the nuances of human consciousness, but rather humans who are attempting to understand the nuances of human consciousness by constructing robots with brains ... as for what has been achieved thus far, here it is in four acts:
Act I: a bunch of scientists introspect about how they think and hypothesize that conscious intelligence is symbol processing ... eventually, an artificial symbol-processing intelligence is constructed that beats the world chess champion ... however, by the time this occurs it has become obvious that this approach will not result in a general human-like intelligence - any meaning or significance is that of the designers, not the intelligence ... interest in symbol processing as a path toward understanding human intelligence fades, and the approach moves off into industrial applications.
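... to make "symbol processing" a bit more concrete, here is a toy sketch of my own (not any actual historical system - the facts and rules are invented purely for illustration): a little rule engine that shuffles symbols whose meanings exist only in the designer's head:

# a minimal illustration of the "intelligence is symbol processing" idea:
# a forward-chaining rule engine manipulating symbols it does not understand
# (these particular facts and rules are made up for illustration)

facts = {("socrates", "is_a", "man")}
rules = [
    # if X is_a man, then X is_a mortal
    (("?x", "is_a", "man"), ("?x", "is_a", "mortal")),
]

def forward_chain(facts, rules):
    """Apply the rules until no new symbolic facts are produced."""
    changed = True
    while changed:
        changed = False
        for (ps, pp, po), (cs, cp, co) in rules:
            for (fs, fp, fo) in list(facts):
                if fp == pp and fo == po:          # pattern matches on predicate/object
                    new = (fs if cs == "?x" else cs, cp, co)
                    if new not in facts:
                        facts.add(new)
                        changed = True
    return facts

print(forward_chain(facts, rules))
# the set now also contains ('socrates', 'is_a', 'mortal') - the engine
# "derives" mortality with no grasp of men or mortality; the significance
# lives entirely in the designer's head.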
Act II: the next wave of scientists hypothesize that conscious intelligence can emerge from neuronal information processing ... eventually, artificial intelligences are designed and/or evolved to recognize anomalies in mammograms and to recognize (and produce) speech - yet again, however, any meaning or significance is that of the designers, not the intelligence (and yet again, interest moves off into industrial applications).
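... again, just to make the idea concrete, a toy sketch of my own - a bare single artificial neuron, not any of the actual mammogram or speech systems, and the data and labels are invented - notice that what the output means is supplied entirely by whoever labeled the data:

# a minimal illustration of the "intelligence from neuronal information
# processing" idea: one neuron trained by error correction on
# designer-labeled toy data (all numbers invented for illustration)
import random

data = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.8, 0.9), 1), ((0.9, 0.7), 1)]

w = [random.uniform(-0.5, 0.5) for _ in range(2)]
b = 0.0
lr = 0.1

def predict(x):
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else 0

for epoch in range(50):                 # simple perceptron learning rule
    for x, label in data:
        err = label - predict(x)
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

print([predict(x) for x, _ in data])    # typically [0, 0, 1, 1]
# the weights encode a boundary, but whether a "1" means tumor or phoneme
# exists only for the humans who labeled the data.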
Act III: the next wave of scientists hypothesize that conscious intelligence can emerge from embodied neural nets ... it is discovered that isolated simple behaviors coupled together in a body and acting in a world can result in the emergence of complex behavior such as locomotion ... it is also discovered that what were thought to be complex computational problems (e.g., locomotion) could actually be vastly simplified by having a body in a world - things like gravity and joint tension can result in "morphological computation" that can offload computation away from the brain ... once again, unfortunately, the meaning/significance is that of the designers, not the intelligence (if you run a robotic puppy on a treadmill, you are providing it with meaning and significance - i.e., what is meaningful and significant is running on a treadmill).
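... here is a rough illustration of my own of the embodied point (a Braitenberg-style vehicle, with every constant invented for the sketch): two light sensors wired straight across to two motors, no planner, no notion of "light" anywhere in the controller, yet light-seeking behavior falls out of body, world, and geometry:

# a Braitenberg-style vehicle: crossed sensor-to-motor wiring, nothing else
# (all constants are invented for illustration)
import math

light = (3.0, 3.0)                       # a light source somewhere in the world
x, y, heading = 0.0, 0.0, 0.0            # robot pose, initially facing +x
dt, turn_gain, speed_gain = 0.1, 60.0, 4.0

def intensity(px, py):
    """Light intensity falls off with distance from the source."""
    return 1.0 / (1.0 + math.hypot(light[0] - px, light[1] - py))

start = math.hypot(light[0] - x, light[1] - y)
for _ in range(300):
    # two sensors mounted left and right on the front of the body
    left_s = intensity(x + 0.3 * math.cos(heading + 0.5), y + 0.3 * math.sin(heading + 0.5))
    right_s = intensity(x + 0.3 * math.cos(heading - 0.5), y + 0.3 * math.sin(heading - 0.5))
    left_motor, right_motor = right_s, left_s                 # crossed wiring
    heading += turn_gain * (right_motor - left_motor) * dt    # differential drive
    speed = speed_gain * (left_motor + right_motor) / 2.0
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt

print(round(start, 2), "->", round(math.hypot(light[0] - x, light[1] - y), 2))
# the vehicle typically ends up circling close to the light, yet "seek the
# light" is written nowhere - and, as with the treadmill puppy, it was still
# us who decided that light is what matters.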
Act IV: Enactive AI is proposed as an approach to get away from "extrinsic teleology" (meaning and significance supplied by the designer) and move toward "intrinsic teleology" (meaning and significance that arises within the intelligence) ... one way to proceed in this approach is to make the intelligence responsible in some way for its own existence ... give it the ability to learn and let it loose in the wild, and it will either die or it will learn how to maintain itself ... and in so doing, what will be learned is perception and action that is meaningful and significant for the intelligence ... that is, the trick to designing an artificial intelligence that displays "intrinsic teleology" is to take the designer out of the equation!
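... to gesture at what that might look like in practice, one last toy sketch of my own (not a published enactive-AI system, and the world, metabolism, and numbers are all invented): the only feedback the agent ever gets is whether its own energy holds out, and an agent that fails to keep that level up simply ceases to exist:

# a toy agent responsible for its own existence: it learns from nothing but
# its own continued viability (everything here is invented for illustration)
import random

SIZE, FOOD = 7, 5                        # a 1-D world with food at one cell
q = {}                                   # Q-table: (position, hungry?) -> action values

def act(state, eps=0.1):
    if random.random() < eps or state not in q:
        return random.choice((-1, +1))
    return max(q[state], key=q[state].get)

lifetimes = []
for episode in range(300):
    pos, energy, age = 0, 5.0, 0
    while energy > 0 and age < 200:
        state = (pos, energy < 3.0)
        a = act(state)
        pos = max(0, min(SIZE - 1, pos + a))
        energy -= 0.2                              # metabolism: living costs energy
        if pos == FOOD:
            energy = min(8.0, energy + 1.0)        # eating replenishes it
        reward = 1.0 if energy > 0 else -10.0      # the only signal: still viable or not
        q.setdefault(state, {-1: 0.0, +1: 0.0})
        nxt = (pos, energy < 3.0)
        q.setdefault(nxt, {-1: 0.0, +1: 0.0})
        q[state][a] += 0.1 * (reward + 0.9 * max(q[nxt].values()) - q[state][a])
        age += 1
    lifetimes.append(age)

print("mean lifetime, first 50 episodes:", sum(lifetimes[:50]) / 50)
print("mean lifetime, last 50 episodes: ", sum(lifetimes[-50:]) / 50)
# lifetimes tend to grow as the agent discovers, for itself, that staying
# near the food is what matters - though of course the metabolism itself is
# still something we designed, so the designer is only partly out of the
# equation here.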