Religiosity and Artificial Intelligence

 
 
nerdfiles

Reply Sat 7 Mar, 2009 01:59 pm
Without delving into the question of the "location" of an AI system's (A) mind (Searle), let us consider on what grounds an AI system could be said to be religious.

1. Suppose strong-AI is possible.

2. If strong-AI is possible, then A has a mind in the same sense that some person P has a mind.

3. Thus, A and P have mental frameworks which fall under the same category, where all the mental properties that can be said to be part of P's mental framework can also be said to be part of A's mental framework.

4. Having a mind in the sense thus presumed is essential to saying of a thing that it is religious or is not religious.

5. If A and P have mental frameworks of the same category, then A and P have categorically the same criteria for religious predicates ("is faithful," "is obedient to God's will," "follows the path of the Lord," etc.).

6. Therefore, if P can be said to be religious, then A can be said to be religious.

Polemical note: This means that AI systems can intelligibly and coherently (sensibly) be said to be religious, and therefore can be said so truly or falsely. That is, propositions about the religiosity of AI systems (1) have sense and (2) are truth-apt.
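For readers who prefer to see the skeleton of the inference laid out explicitly, here is a minimal sketch in Lean 4. The identifiers (StrongAI, HasMind, Religious, A, P) are labels introduced purely for illustration, not the original wording, and premises 3–5 are collapsed into a single "shared criteria" hypothesis.

-- A minimal Lean 4 sketch of the argument's skeleton; names are illustrative.
variable (Agent : Type)                     -- things that can bear mental predicates
variable (StrongAI : Prop)                  -- "strong AI is possible"
variable (HasMind Religious : Agent → Prop) -- mental and religious predicates
variable (A P : Agent)                      -- the AI system and some person

theorem religiosity_transfers
    (h1 : StrongAI)                                  -- premise 1
    (h2 : StrongAI → HasMind A)                      -- premise 2: strong AI gives A a mind
    (h35 : HasMind A → (Religious P → Religious A))  -- premises 3-5: shared criteria
    (hP : Religious P) :                             -- suppose P is religious
    Religious A :=                                   -- conclusion 6
  h35 (h2 h1) hP                                     -- modus ponens, applied twice

The deductive step is trivial once the premises are granted; the philosophical work lies in whether the hypotheses corresponding to premises 2 and 5 are true.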

If you don't personally have a problem with this, then so be it. But I suspect you might want to deny a premise whether or not you are religious. If you are religious, you will likely want to deny a premise, point out something missing, or dogmatically assert that AI systems can never be religious in the same sense that you are religious (even while accepting the strong-AI premise). If so, I'd like to see your dogma stated with the usual frankness.

 
Aedes
 
Reply Sat 7 Mar, 2009 03:13 pm
@nerdfiles,
I don't have any particular problem with it, nor for that matter with the application of any kind of character predicate to an AI system indistinguishable from a human brain.

This stuff becomes much more problematic when considering moral issues, i.e., is the AI system sufficiently human that we should concern ourselves with its feelings?
nerdfiles
 
Reply Sat 7 Mar, 2009 03:25 pm
@Aedes,
Aedes;52382 wrote:
This stuff becomes much more problematic when considering moral issues, i.e., is the AI system sufficiently human that we should concern ourselves with its feelings?


Good! It is not at all apparent in my post, but given the far-reaching nature of this topic, I could just as easily have asked this question in the Ethics, Mind, Aesthetics, or what-have-you forum.

I really wanted to go the way of Ethics, but I figured this thread would get more heat in a "religion" forum.

So yes, it is clear that this argument is more general, and touches more people, when we apply it to the conditions for saying that X is moral, where X is a P or an A.

Under what conditions might we say that A can properly be ascribed a moral predicate? Equally enticing is the question of the relationship between morality and religion (the Divine Command argument + strong AI).
 
Aedes
 
Reply Sat 7 Mar, 2009 07:48 pm
@nerdfiles,
It's a good question that I think is illustrated in some robot movies, like the one with Haley Joel Osment (A.I.). One of the most common critiques of the movie is that it's just hard to get wrapped up in a character who you know is a robot. And it was true -- it was a schmaltzy movie that tugged at the heartstrings, and yet if the main character had gotten crushed by a truck, I'm not sure I'd have cared too much.

Think about that example. Or think about C3PO's various encounters with disintegration, etc. In A.I. you have a cute kid of an actor who you know is a real live human outside the movie, playing the character of an innocent, persecuted little robot boy. In other words, the movie is creating a story that could be told about a human -- but we know the character is a robot. And that little bit of information de-invests us emotionally.

That makes me think that our moral sense of humanity and human obligation does not hinge upon the brain or the intelligence -- it hinges upon a sense of shared identity. This extends in a sort of diluted way to animals (diluted in that it becomes weaker as we anthropomorphize less). This is why parents passionately love their kids even if they have severe mental retardation and could never achieve the type of "mind" a convincing A.I. being would have. This is why we find the Nazi extermination of mentally handicapped kids to be a crime against humanity. Humanity consists in the "thing" of us too, not just the "being" of us.
Parapraxis
 
Reply Sun 8 Mar, 2009 02:40 am
@Aedes,
Quote:
This stuff becomes much more problematic when considering moral issues, i.e., is the AI system sufficiently human that we should concern ourselves with its feelings?


I have always thought that the distinguishing characteristic between computers and humans is that computers can share the qualities of a human only insofar as they have been programmed to do so, whereas the possibilities and complexities of human behaviour, emotion, etc. are infinite. An easy retort to this is that it's possible humans are just as "programmed" as computers; and my distinction is probably undermined by the advancement of computer systems to the point of emergent behaviour.

Just a thought.