The Chinese Room - an AI thought experiment.

 
 
aperson
 
Reply Sun 23 Sep, 2007 03:23 am
This has been copied from my blog.

You may be familiar with the Chinese Room thought experiment, an argument against strong artificial intelligence. It goes as follows:
I am sitting in a room, and I have no way of communicating with the outside world apart from a small slot, through which a Chinese-speaking person passes me a message in Chinese. I cannot speak Chinese, but I have a very large (understatement) book that instructs me on how to reply to the message. I understand neither the message going in nor the message going out. The Chinese-speaking person then passes me a reply, and it continues in this fashion. In this way, I can have a conversation with the Chinese-speaking person without speaking Chinese.
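
To picture the setup in modern terms, the book behaves like a giant lookup table. Here is a toy sketch of that reading (my illustration, not Searle's; the entries are invented, and a real book would need astronomically many):

    # Toy sketch of the Room's rule book as a literal lookup table.
    # The entries are invented; the point is that the operator maps
    # symbols to symbols without understanding either side.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
        "你会说中文吗？": "当然。",      # "Can you speak Chinese?" -> "Of course."
    }

    def operator(message):
        # Follow the book's instructions; no understanding required.
        return RULE_BOOK.get(message, "请再说一遍。")  # "Please say that again."

    print(operator("你好吗？"))  # prints: 我很好，谢谢。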

Searle (the creator of this thought experiment) likens the Chinese Room to a strong artificial intelligence, arguing that AIs do not actually understand what they are saying and are not truly conscious or aware.

Being a supporter of strong AI, I find several problems with the Chinese Room when it is applied to an intelligent AI. Firstly, there is an astronomically large number of possible replies. The "Room" would therefore have to consider its objectives, be tactical, and weigh how a reply might affect the person, the person's emotions, the person's view of the "Room", and so on. So obviously a simple rule book wouldn't work.

Secondly, it would have to relate back to things said earlier in the conversation. It would have to have a memory. Consciousness is merely the accessing of memory.
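
As a minimal sketch of what I mean by memory (the rules here are invented, nothing more), a reply can be conditioned on the stored history of the conversation rather than on the current message alone:

    # Toy responder whose replies depend on stored history, not just
    # the current message.
    class Chatter:
        def __init__(self):
            self.history = []

        def reply(self, message):
            self.history.append(message)
            if len(self.history) > 1:
                return "Earlier you said %r; now you say %r." % (self.history[0], message)
            return "You say %r." % message

    bot = Chatter()
    print(bot.reply("hello"))    # You say 'hello'.
    print(bot.reply("goodbye"))  # refers back to "hello"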

Thirdly, (this is perhaps my most... controversial problem) what is to say that we, humans, are not just Chinese Rooms? What if we are simply systems? It makes sense when you think about it, doesn't it? Information goes in, it goes through a system called the brain, and a decision is made. This is moving into the free will topic, but so be it. IMO we are just highly complex systems. THE REASON that people fail to accept this is that they cannot handle a) the fact that we have no free will b) that a thing so complex as the human brain is just a system.

Let's face it people. One day, most likely in the next century, AI will surpass us. On that day, there will be nothing to suggest that the human mind is more "special" than the AI, in that it is conscious. So we may as well face the facts now, and deal with them.

 
OGIONIK
Reply Sun 23 Sep, 2007 06:12 am
imo ai will save humanity from itself. i envision a form of singularity able to obtain resources and maintain itself by itself, with nearly infinite data at its fingertips, becoming a sort of god that makes sure we don't kill ourselves off.

i always wondered what happens when robots can build, design and maintain themselves...

tinygiraffe
Reply Sun 23 Sep, 2007 07:40 am
the right for humanity itself to survive at any cost is possibly the height of human arrogance. to a point, i participate in and don't deny that arrogance.

unless we can make ai participate in that arrogance as well, we risk that it may destroy us on behalf of all other life the moment it gains enough power to. a powerful ai would need to believe, at the cost of all other life, that it was one of us.

let me be absolutely clear here: i'm not painting mankind as something fundamentally destructive to all other life, but our current status is just that. the idea of humans, of all species, talking about the "sanctity of life" is a sick, pathetic joke within the realm of "civilization."

that little diatribe aside, i totally agree: the human brain itself is a chinese room. the rest of the question depends on where the mind is located. if 100% of the mind is in the brain, it should be possible to replicate or even simulate it. if the mind is elsewhere, or everywhere, it might be a bit more tricky.

naturally, most scientists are going to say, "well of course the mind is in the brain!" and for all i know they're absolutely right - but personally i am fond of, and affected by, the kind of thinking that goes contrary to that idea.

aperson
Reply Sun 23 Sep, 2007 03:51 pm
Well, I believe that the mind is just an abstract concept. The mind is the brain, and vice versa.

Being an avid reader of science fiction, I am familiar with the concept of copying the human mind. Alastair Reynolds, in particular, focuses on this concept: in his novels, the mind can be copied into a cyber form, which he terms an "Alpha simulation".

The problem is, how do we translate the language of the human brain into a computer language?

OGIONIK
Reply Mon 24 Sep, 2007 03:34 am
EASY, crack the human brain language code, right?

joefromchicago
Reply Mon 24 Sep, 2007 10:07 am
Re: The Chinese Room - an AI thought experiment.
aperson wrote:
Firstly, there is an astronomically large number of possible replies. The "Room" would therefore have to consider its objectives, be tactical, and weigh how a reply might affect the person, the person's emotions, the person's view of the "Room", and so on. So obviously a simple rule book wouldn't work.

Whether the rule book is simple or complex is largely immaterial. That just goes to the quality of the replies that the rule book generates, not to whether the AI is actually "thinking."

aperson wrote:
Secondly, it would have to relate back to things said earlier in the conversation. It would have to have a memory. Consciousness is merely the accessing of memory.

Again, that goes to the quality of the replies. A "magic 8 ball" doesn't have any memory at all, but it gives replies that seem to fit the questions being asked. If an AI is imbued with some sort of memory, that just makes the replies better. It doesn't mean that memory is essential, or that the AI is "thinking" because it is also accessing some sort of memory.

aperson wrote:
Thirdly, (this is perhaps my most... controversial problem) what is to say that we, humans, are not just Chinese Rooms?

Maybe we are.

aperson
Reply Mon 24 Sep, 2007 06:03 pm
1. The problem is that you are suggesting that the rule book would still just be a simple rule book. THIS IS NOT HOW COMPUTERS WORK. A program - a spreadsheet, for example - doesn't just have a list of answers to what you write; it uses an equation to work things out, so it doesn't have to have a list. Do you see what I'm getting at? It's not just a list of things and how to reply to them, it's a complex formula. For a chatbot, a list would take up stupidly large amounts of memory. No, it is more than a book, it is a system. This is just like the human mind. If we developed an intelligent AI, we wouldn't use a list; we'd develop it along the lines of the human brain. (See the sketch at the end of this post.)


2. In case you haven't noticed, magic eight balls don't have memories - they're just random.

3. Yes, maybe. The problem lies in the question, "AM I CONSCIOUS?". The answer may seem obvious, but there is more to it than meets the eye. The statement "I think, therefore I exist" is not necessarily true. It is a Buddhist belief that there is, in fact, no I.
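
Here is the sketch promised in point 1 - a toy contrast between a list and a formula (both examples are hypothetical):

    # Lookup approach: one stored entry per input -- it cannot scale.
    squares_list = {0: 0, 1: 1, 2: 4, 3: 9}  # the "book" ends here

    # Formula approach: one rule covers unboundedly many inputs.
    def square(n):
        return n * n

    print(squares_list.get(1000))  # None -- not in the list
    print(square(1000))            # 1000000 -- computed, not looked up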

rosborne979
Reply Mon 24 Sep, 2007 09:13 pm
Re: The Chinese Room - an AI thought experiment.
aperson wrote:
This has been copied from my blog.

You may be familiar with the Chinese Room thought experiment, an argument against strong artificial intelligence. It goes as follows:
I am sitting in a room, and I have no way of communicating with the outside world apart from a small slot, through which a Chinese-speaking person passes me a message in Chinese. I cannot speak Chinese, but I have a very large (understatement) book that instructs me on how to reply to the message.

Then you are just a mechanical arm for "The Book".

The crux of the matter comes down to how the book 'instructs' you to communicate. Does the book 'understand' what it is being given as input, or is it simply passing it through a series of Boolean tests which result in an output?

All computer software currently passes input through a series of Boolean gates to construct an output. Even if random Boolean "errors" are introduced, the randomization is part of the planned process. This is not AI, because as long as we can recognize it as mere processing, that is all it will be to us. But once the output exceeds our ability to reverse-engineer the processing that generated it, we will have to call it AI.
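
As a minimal sketch of input passing through Boolean gates (my example, chosen for brevity): a half-adder built from nothing but an XOR gate and an AND gate, the kind of primitive that larger computations are composed of.

    # Half-adder: two Boolean gates turning two input bits into a
    # sum bit and a carry bit.
    def half_adder(a, b):
        total = a ^ b   # XOR gate -> sum bit
        carry = a & b   # AND gate -> carry bit
        return total, carry

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", half_adder(a, b))
    # 0 0 -> (0, 0)   0 1 -> (1, 0)   1 0 -> (1, 0)   1 1 -> (0, 1)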

joefromchicago
Reply Tue 25 Sep, 2007 08:02 am
aperson wrote:
1. The problem is that you are suggesting that the rule book would still just be a simple rule book. THIS IS NOT HOW COMPUTERS WORK. A program - a spreadsheet, for example - doesn't just have a list of answers to what you write; it uses an equation to work things out, so it doesn't have to have a list. Do you see what I'm getting at? It's not just a list of things and how to reply to them, it's a complex formula. For a chatbot, a list would take up stupidly large amounts of memory. No, it is more than a book, it is a system. This is just like the human mind. If we developed an intelligent AI, we wouldn't use a list; we'd develop it along the lines of the human brain.

You're the one who used the Chinese Room hypothetical. Don't complain if I used it too. Besides, the rule book in the hypothetical is merely a metaphor for any system of rules. It could be a simple book, it could be a highly sophisticated computer program -- it really doesn't matter. The hypothetical situation remains the same, regardless of the complexity of the rule book.


aperson wrote:
2. In case you haven't noticed, magic eight balls don't have memories - they're just random.

In case you hadn't noticed (and clearly, you hadn't), here's what I wrote:
    Again, that goes to the quality of the replies. A "magic 8 ball" doesn't have any memory at all, but it gives replies that seem to fit the questions being asked.


aperson wrote:
3. Yes, maybe. The problem lies in the question, "AM I CONSCIOUS?". The answer may seem obvious, but there is more to it than meets the eye. The statement "I think, therefore I exist" is not necessarily true. It is a Buddhist belief that there is, in fact, no I.

If this is going to turn into yet another Buddhist thread, I am so outta' here.

aperson
Reply Wed 26 Sep, 2007 02:40 am
It appears old joe has quickly turned hostile.

I merely corrected you on thinking that AI just uses lists.

That's fairly easy when the answers are as vague as "Yes", "No", "Perhaps" and the like. It's slightly different with an intelligent conversation.

Rosborne,
Yes, it is my view that it is not the person who should understand, but the book. (I recently read a highly thought-provoking novel called Genesis by Bernard Beckett, in which it is suggested that the book - the system - is what understands.)

Well, I suppose it all comes down to whether there is a difference between the system of the human mind and the system of an AI, besides complexity. I am inclined to think not - the human mind is just a system, albeit a complex one. I am not familiar with the term "Boolean tests". Could you please explain further?

joefromchicago
Reply Wed 26 Sep, 2007 07:57 am
aperson wrote:
It appears old joe has quickly turned hostile.

Hostile? Hardly.

aperson wrote:
I merely corrected you on thinking that AI just uses lists.

That's fairly easy when the answers are as vague as "Yes", "No", "Perhaps" and the like. It's slightly different with an intelligent conversation.

No correction needed. As I said, the rule book in the Chinese Room hypothetical is merely a metaphor for any rule-based system. Whether that system is a rule book, a list, or a sophisticated computer program is largely irrelevant to the question posed. But it appears that you don't understand that. Oh well, your loss.

aperson
Reply Wed 26 Sep, 2007 03:28 pm
Hostility again. I do understand it, I just thought that you thought that it could only mean a list.

rosborne979
Reply Wed 26 Sep, 2007 09:52 pm
aperson wrote:
I am not familiar with the term "Boolean tests". Could you please explain further?

Boolean tests are the elements of Boolean logic by which all computer processors function.

Boolean operations are most apparent at the hardware level, in logic circuit design and transistor gates. However, computer programs also use Boolean logic at higher levels.

- Boolean
adjective
of or relating to a combinatorial system devised by George Boole that combines propositions with the logical operators AND and OR and IF THEN and EXCEPT and NOT
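
For instance, the operators in that definition look like this as Boolean tests in code (a minimal sketch):

    p, q = True, False
    print(p and q)        # AND: False
    print(p or q)         # OR: True
    print(not p)          # NOT: False
    print((not p) or q)   # IF p THEN q (material implication): False
    print(p and not q)    # p EXCEPT q (AND NOT): True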

aperson
Reply Thu 27 Sep, 2007 02:02 am
OK, thanks.

Are Boolean tests all that is needed to create a strong AI?

rosborne979
Reply Thu 27 Sep, 2007 02:05 pm
aperson wrote:
OK, thanks.

Are Boolean tests all that is needed to create a strong AI?


I'm not sure I understand your question.

Boolean tests are all that computers are currently capable of. They are the rudimentary basis of computation.

Everyone knows about binary, because even the public can grasp the concept of "zero and one", "true and false", or "yes and no". But Boolean logic is another basic aspect of computer systems. Boolean operations (or tests) are the actions performed on the binary information to produce a changed output (different binary information).
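
A small sketch of that idea: Boolean operations applied bit-by-bit to binary information, turning one bit pattern into another - the "changed output":

    x = 0b1100
    y = 0b1010
    print(bin(x & y))        # 0b1000 -- AND
    print(bin(x | y))        # 0b1110 -- OR
    print(bin(x ^ y))        # 0b110  -- XOR
    print(bin(~x & 0b1111))  # 0b11   -- NOT, masked to 4 bits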

aperson
Reply Thu 27 Sep, 2007 05:54 pm
What I mean is, in order to create a strong AI - that is, one with capacities similar to the human brain's - is another operating system needed? Or can Boolean tests be used? Are Boolean tests not complex enough for a strong AI?

rosborne979
Reply Thu 27 Sep, 2007 09:19 pm
aperson wrote:
What I mean is, in order to create a strong AI - that is, one with capacities similar to the human brain's - is another operating system needed? Or can Boolean tests be used? Are Boolean tests not complex enough for a strong AI?

Well, that's a good question. I guess I don't know the answer.

For one thing, computers already have 'similar capacities' to the human brain, so I think we would need to have a more precisely defined target to shoot for before we could figure out how close we could get.

But assuming that you mean 'consciousness' and 'self awareness' as the targets, then I'm not sure existing algorithms will be sufficient, even if the density of computational circuits were to increase exponentially (which would be necessary to match the number of connections in a human brain).

Neural processing and computational processing are at present fundamentally different in physical design. Likewise, the algorithms which drive them are also fundamentally different.

However, I suspect that just as evolution converges on similar results from different designs, consciousness too may be possible from different foundations. The end result may have a subtly different 'flavor' to it, but the two might be indistinguishable at most levels.

Also, I don't think humans will ever directly 'program' consciousness. My guess is that we will eventually figure out how to 'program' evolutionary processes into software design, and that the process itself will begin to spin off rudimentary AIs, which we will continue to select until they become strong AIs. At the moment we are held back by computational speed, circuit density and limited high-level design (all of which we are constantly gaining ground on).
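
A toy sketch of that last idea - selection and mutation driving a population toward a fitness target. Everything here (the target, the rates, the sizes) is invented for illustration; a real evolved AI would need a far richer fitness test than matching a fixed bit string:

    import random

    TARGET = [1] * 20  # arbitrary goal: a string of twenty 1-bits

    def fitness(genome):
        # Count positions that match the target.
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.05):
        # Flip each bit with a small probability.
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break  # a perfect genome has evolved
        survivors = population[:10]  # selection
        population = [mutate(random.choice(survivors)) for _ in range(30)]

    best = max(population, key=fitness)
    print("generation", generation, "fitness", fitness(best))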