@odenskrigare,
Logical behaviorism applies to artificial intelligence. Just what is logical behaviorism? Roughly, it's the view that mental states are nothing more than dispositions to behave in certain ways under certain conditions. A digital system (for example, the software this "brain" will be built upon) is exactly that kind of thing: it evaluates conditional "if" statements and produces the corresponding "then" responses (this is how most code works at bottom). Let's take an example:
Here's something very rudimentary that could possibly be programmed into this "brain":
IF:
"****"
THEN:
Sad [there would be behind-the-scenes coding for the reaction "sad"]
So, basically, if you cursed at the "brain", it would respond with sorrow. I understand this is very rudimentary, so now imagine millions of these "if"/"then" conditional processes programmed into the software. Yes, millions (I'm going to suppose this "brain" will be increasingly complex). Now the brain will be able to respond to, and probably even learn (an increase in recognitional capacity), thousands of different things; in essence, it will appear almost human. It would probably behave sort of like a 3-6 year old human, except with more "knowledge". If you do something to it, it will react in a certain way; if you ask it something, it will provide the answer. That is simply how the processes work. Digital systems are governed by a set of rules, code, and processes.
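To make that concrete, here is a minimal sketch in Python, purely for illustration; the specific rules and the learn_rule helper are made-up examples, not anything actually programmed into such a system. The point is just that every "reaction" is a lookup in a table of if/then pairs, and "learning" is nothing more than adding entries to that table.

# A minimal, hypothetical sketch of a purely stimulus-response "brain".
# Every "mental state" here is just a lookup in a table of if/then rules.

rules = {
    "****": "sad",            # cursing triggers the programmed "sad" reaction
    "hello": "greet back",
    "what is 2+2?": "answer: 4",
}

def react(stimulus):
    # The system only recognizes inputs it has a rule for; it attaches
    # no meaning to them. Unknown inputs get a default response.
    return rules.get(stimulus, "no reaction")

def learn_rule(stimulus, reaction):
    # "Learning" is just adding another if/then pair: an increase in
    # recognitional capacity, not in understanding.
    rules[stimulus] = reaction

print(react("****"))          # -> sad
learn_rule("goodbye", "wave")
print(react("goodbye"))       # -> wave

Now imagine that table with millions of entries instead of three: the behavior gets richer, but the mechanism never changes.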
Now let us consider whether this is similar to human consciousness.
First, does this digital system bear intentionality?
No. It does not intentionally react the way it does; it is not aware as a human is aware. It has to react the way it does, by definition of a digital system. It's important to understand that software does not hold any semantic capacity; it holds recognitional capacity. What this means is that the software has no capacity for understanding meaning. It can react with "sadness", a programmed response, but it cannot consider the meaning of that response. It does not question, or share qualia, the way human consciousness can. No original thoughts are generated; they simply cannot be, by definition. And without semantic capacity, you do not have consciousness as we regard it in humans. Recognitional capacity is not consciousness. Logical behaviorism applies to artificial intelligence, but it does not apply to humans. Humans are not governed by logical "if"/"then" statements.
Keep in mind that some disagree with me (Daniel Dennett, for one), but this argument against strong AI was made by John Searle.
With this said, I probably have those fighters for humanity cheering me on, but let me be very, very clear: I'm not saying it's impossible that a human replica will be made one day. On the contrary, I think it's very likely one will. However, it will not come through a digital simulation; as noted, that simply cannot happen, by definition of a digital system. Such a simulation will probably appear human, but it simply will not be human. Once we actually do come to the point where we completely replicate the brain, we will not call it artificial intelligence, and it will not be a digital system. It will simply be intelligence, and it will simply be human.