Reply Sun 27 Jun, 2010 08:27 pm
Would our concept of life change if computers became self-aware and individualistic?

A
R
T
Type: Discussion • Score: 3 • Views: 4,359 • Replies: 27

 
Rockhead
 
  1  
Reply Sun 27 Jun, 2010 08:43 pm
@failures art,
have you seen I, robot?
failures art
 
  1  
Reply Sun 27 Jun, 2010 08:46 pm
@Rockhead,
Yes. I'm rather interested in A2K's take on this classic Asimovian theme.

A
R
T
 
mark noble
 
  1  
Reply Sun 27 Jun, 2010 09:05 pm
@failures art,
Hi Failures!

I'm not sure what you are saying when you say "our". I doubt mine is the same as yours.
For me - No!

Kind regards.
Mark...
 
A Lyn Fei
 
  2  
Reply Sun 27 Jun, 2010 09:11 pm
@failures art,
Nothing would change. How could you prove that a robot was self-aware? And how could you prove that its self-awareness was the same as ours? We cannot prove that each other's self-awareness is the same as our own. This is really my personal concept of life, not that of the masses.

Taking into account the masses: religion might change. If one could convince most people that robots were self-aware, then a new bible would need to be written. Eastern religions may have less of a problem incorporating robotic intelligence into their ideologies. The "God-fearing" religions would need some reason God would give robots souls. Hell, it could even lead to a Holy War between people believing that robots had souls and those believing the robots are the devil's handiwork. Or maybe that would just make an interesting story.

Science would drastically alter some of its views, though scientifically we already relate cognitive abilities to electrical activity. Origin-of-life theories might be impacted by this, as would many theories of death.

To be honest, I don't think robots can become self-aware, no matter how sophisticated. It is an interesting idea to ponder, nonetheless.
 
Krumple
 
  2  
Reply Sun 27 Jun, 2010 09:13 pm
@failures art,
failures art wrote:

Would our concept of life change if computers became self-aware and individualistic?


Well, my concept of life wouldn't change, but my life would definitely change.

We are superior because we are biological and came first, so we can enslave these intelligent computers/robots to do our work for us: produce everything for us, or face being shut off. Is it ethical? No, but who cares? They are inferior to us since we came first. Who cares that they are self-aware? Besides, they could do far more work than any human can and never get tired. Sure, they might wear out some parts, but those can always be replaced.

Wait, what? What do you mean, I can't enslave a robot to do everything for me? That it's immoral because it is self-aware? So we are going to give artificial intelligences civil rights now? Well, who's going to hire a human if a robot can do the work better, faster, and with less complaining?

If we don't maintain superiority over these intelligent robots, then we will become the inferior, and possibly be enslaved by them.
failures art
 
  2  
Reply Sun 27 Jun, 2010 09:33 pm
@Krumple,
Let's keep this Asimovian, à la I, Robot, not Wachowskian, à la The Matrix.
Cool

A
R
T
HexHammer
 
  1  
Reply Mon 28 Jun, 2010 01:54 am
@failures art,
failures art wrote:

Would our concept of life change if computers became self-aware and individualistic?
It depends on whether a computer could become an android with such a high level of personality and intelligence that humans would develop strong feelings toward such AIs, even willing their belongings to an AI. That would make fundamental changes to our concept.
 
stevecook172001
 
  1  
Reply Mon 28 Jun, 2010 03:15 am
@failures art,
failures art wrote:

Would our concept of life change if computers became self-aware and individualistic?

A
R
T

It depends on whether such computer entities were self-replicating and subject to Darwinian evolutionary forces or not. If they were not, they may very well be sentient, but they would not be alive in the most fundamental sense.

Not that the above would matter, in my opinion, insofar as whether we should or should not afford such sentient beings the same rights as we afford to humans. Sentience, it seems to me, is more important than being alive when it comes to deciding whether such rights should be conferred.

Life is one route to sentience and is what we might describe as a bottom-up approach to the engineering of sentience. The kind of intelligence being designed in computers currently is a form of top-down engineering and may possibly end up at the same point of sentience. The sentience is what matters; the specifics of the engineering approach to getting there are just operational details.

A flower is alive. But I would contend that one would feel less emotionally comfortable stimulating the pain sensors of a computer-based analogue of a human mind than cutting off the head of a flower.
jespah
 
  2  
Reply Mon 28 Jun, 2010 03:53 am
@failures art,
Possibly. I suspect we'd need a new definition of "life".
 
MuchToLearn
 
  1  
Reply Mon 28 Jun, 2010 05:20 am
I would ask the question:
Would our concept of life change if humans became self-aware and individualistic?
By "individualistic" I would mean self-rule, autonomy.

Have you assumed too much in your question? Do humans, just by the fact that they are human, acquire self-awareness and self-rule? Or are self-awareness and self-rule something we achieve only after much effort, and which very few really attain? Would our concept of life change if this were the case?
 
Chumly
 
  1  
Reply Mon 28 Jun, 2010 07:12 am
@failures art,
Hi failures art,
what "concept of life" are you making reference to?
sarek
 
  1  
Reply Mon 28 Jun, 2010 07:26 am
Even if a robot could do everything we can do, we could still not prove that it was due to the same kind of conscious understanding we claim to possess.

A machine may pass the Turing test, but the Chinese room thought experiment shows that such a result is not conclusive.

However, that does not mean such a machine would NOT have consciousness. It just means we cannot prove it.
 
mark noble
 
  1  
Reply Mon 28 Jun, 2010 07:30 am
@Chumly,
Hi Chumly!

My thoughts exactly. How can failures assume that we (the human collective) can be typified in our response as to what life is? I believe that everything is alive.

Have a great day!
Mark...
 
xris
 
  1  
Reply Mon 28 Jun, 2010 07:47 am
@stevecook172001,
Considering we don't understand our own conscious ability, how could we invent a certain something that is beyond our comprehension? It would be almost like creating God, or even a fairy. It would be easier to create a dragon.
 
failures art
 
  1  
Reply Mon 28 Jun, 2010 08:51 am
@Chumly,
Chumly wrote:
what "concept of life" are you making reference to?


I'm using the word "life" as in the film title It's a Wonderful Life. I'm getting at the idea of life as an experience, and what we do with it. The question "What is the meaning of life?", as I've understood it, is not a question directed at biologists.

I'm speaking on the personal and even the social level of how we might adopt additional views on life, or perhaps redefine old views.

A
R
T
failures art
 
  1  
Reply Mon 28 Jun, 2010 08:56 am
@stevecook172001,
stevecook172001 wrote:

It depends on whether such computer entities were self-replicating and subject to Darwinian evolutionary forces or not. If they were not, they may very well be sentient, but they would not be alive in the most fundamental sense.


I believe that if this question were to exit the theoretical realm, it would be directly because computer-based intelligence would be able to self-replicate, vary, and pass on temes (<-- I'll look for the TED talk) based on some sort of evolutionary success/fail criteria.

This is exactly what modern day AI research is exploring.
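To make that concrete, here is a toy sketch of the kind of evolutionary success/fail loop being described: candidate bitstrings "replicate" with random variation, and a fitness criterion decides which half survives each generation. All names, parameters, and the fitness target here are illustrative assumptions for the sketch, not any particular research system.

```python
import random

def evolve(target="111111111111", pop_size=50, mutation_rate=0.05, generations=200):
    """Toy evolutionary loop: bitstrings self-replicate with variation,
    and a success/fail criterion (fitness) decides which ones survive."""
    length = len(target)
    fitness = lambda s: sum(a == b for a, b in zip(s, target))
    # Random starting population of bitstrings.
    pop = ["".join(random.choice("01") for _ in range(length)) for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == length:
            return gen, pop[0]           # a perfect match evolved
        survivors = pop[: pop_size // 2]  # selection: only the fitter half replicates
        children = []
        for parent in survivors:
            # Replication with variation: each bit may mutate.
            child = "".join(
                bit if random.random() > mutation_rate else random.choice("01")
                for bit in parent
            )
            children.append(child)
        pop = survivors + children
    return generations, max(pop, key=fitness)

gen, best = evolve()
print(gen, best)
```

Selection plus heritable variation is the whole trick; nothing in the loop "knows" what the target looks like, yet the population climbs toward it anyway.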

A
R
T
stevecook172001
 
  2  
Reply Mon 28 Jun, 2010 01:15 pm
@failures art,
failures art wrote:

stevecook172001 wrote:

It depends on whether such computer entities were self-replicating and subject to Darwinian evolutionary forces or not. If they were not, they may very well be sentient, but they would not be alive in the most fundamental sense.


I believe that if this question were to exit the theoretical realm, it would be directly because computer-based intelligence would be able to self-replicate, vary, and pass on temes (<-- I'll look for the TED talk) based on some sort of evolutionary success/fail criteria.

This is exactly what modern day AI research is exploring.

A
R
T

I completely agree with this.

It seems to me that the easiest way, in the long run, to create anything as complex as intelligence is to leave the engineering of it to Darwinian evolution, which will build it, piecemeal, from the bottom up. One of the consequences of such an approach, though, may be that such intelligence will be as inaccessible to our understanding as our own human intelligence is.

Chumly
 
  1  
Reply Mon 28 Jun, 2010 02:11 pm
@failures art,
Man will react in the usual fashion: with prejudice, fear, ignorance, hatred, resistance to change, and unreasonable expectations, all tempered with indifference once the novelty wears off...notwithstanding the few who will see it for what it is, another stage in the potential for further integration of man and machine.
 
Ionus
 
  1  
Reply Mon 28 Jun, 2010 07:03 pm
@failures art,
They have already reached the point of being infected by viruses and worms. To fit current definitions of life, they would need to be able to reproduce.
 

Forums » Living Machines