Robotic emotions

 
 
Smiley451

Reply Mon 13 Jul, 2009 10:01 pm
hello, everyone

I am writing a short story about a robot, but I can't really think of any good attributes to give the robot. I'm trying to make it as realistic and believable as I can.
So, if robots were to have emotions, what would they have? What specific emotions, what would they feel like to the robot, how would the robot cope with them, and how would these emotions affect the robot?
I am assuming, of course, that robots could have emotions, which likely won't ever be the case. But just for the sake of speculation, I'd be happy to hear your thoughts on this.

 
validity
 
Reply Mon 13 Jul, 2009 10:37 pm
@Smiley451,
Robots are purpose-built and limited by cost. A builder robot, for example, needs a different set of emotions, if any, than a medical-assistant robot. Giving a builder robot the full range of emotions would be unnecessary and a waste of money, though a builder robot might benefit from pride.

Caroline

Reply Tue 14 Jul, 2009 04:17 am
@Smiley451,
In The Hitchhiker's Guide to the Galaxy, the robot Marvin was depressed, which had a comical effect, but I guess that's not what you're looking for.

jeeprs

Reply Fri 17 Jul, 2009 05:21 am
@Smiley451,
Give it pain. Work out a system so when you whack it, it cries. Better still, make it sensitive to insults - say 'the X5U was much more subtle than you' and it cries. Unbeatable. Tricky to pull off, though.

I suppose the next thing would be self-healing. You whack it, it cries, but then the injury heals. You insult it, it gradually gets over it (although it still harbours resentment). All of this will be difficult, but the resentment algorithm will be a particular challenge, I feel.
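(A minimal sketch of what such a system might look like, purely hypothetical, in Python: pain and resentment as decaying state variables. The class and all the numbers are invented for illustration.)

Code:
class Robot:
    def __init__(self):
        self.pain = 0.0
        self.resentment = 0.0

    def whack(self, force):
        self.pain += force               # physical hurt

    def insult(self, severity):
        self.pain += severity
        self.resentment += severity      # insults linger

    def tick(self):
        if self.pain > 1.0:
            print('*cries*')
        self.pain *= 0.5                 # injuries heal quickly
        self.resentment *= 0.95          # grudges fade slowly

r = Robot()
r.insult(2.0)   # 'the X5U was much more subtle than you'
r.tick()        # prints *cries*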
Exebeche
 
Reply Mon 27 Jul, 2009 03:58 pm
@jeeprs,
jeeprs;77887 wrote:
Give it pain. Work out a system so when you whack it, it cries. Better still, make it sensitive to insults - say 'the X5U was much more subtle than you' and it cries. Unbeatable.


That's cracking me up.
More seriously, however, the question whether or not a robot could have the ability to feel pain deserves some attention.
First we might have to differentiate between feeling and perception.
A perception is something much more primitive.

The unicellular protist called Euglena does not have a single neuron and thus no ability to feel anything that we would consider equivalent to pain.
However, it has a primitive organ of perception, something we could consider a precursor of a sense organ.
A particular spot undergoes chemical reactions depending on sunlight, and these automatically trigger a particular pattern of movement in its flagellum, which always makes it head for the sun. It does not feel any better when it heads for the sun; it is a mechanical process from which this little organism profits, because the energy from the light really does have a positive impact on its metabolism.
Obviously, since it doesn't have any feelings, it was evolutionary selection that caused this automatism to appear.
As a human, I like the sun, but I don't just perceive it; seeing the sun makes me feel good (especially in cold old Germany).
Could there be something like a robot that also likes the sun?
I mean one that doesn't just perceive it, but really likes it?
The radio in my car shows a kind of autonomous behaviour: when the reception of a station gets weaker, it searches for the next better frequency to connect to.
Of course this too is a kind of perception. The perception is not registered by a consciousness, of course, but neither is Euglena's.
If anyone finds Euglena's perception too primitive to be considered such, we have plenty of stages between Euglena and humans. For example, a slug is still very primitive, but it definitely has perceptions.
It has evaluating mechanisms telling it whether an input is good or bad (cutting its skin, e.g., is bad).
A machine might also have an evaluation system, like primitive organisms do.
For example, a solar-driven machine could autonomously register that it needs more sun and, according to this perception, move to a different position.
The question is: will the machine feel bad when it doesn't get enough sun and runs low on power?
If we give the machine an urge to stay charged by any means, will it feel bad when its battery is running low?
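To make that concrete, here is a minimal sketch of such an evaluation loop (hypothetical Python; the sensor model and thresholds are invented). Note that nothing in it feels anything; it is pure mechanism, like Euglena's phototaxis:

Code:
def light_level(position):
    # stand-in sensor model: more light further along the track
    return max(0.0, min(1.0, position / 10.0))

def step(position, battery):
    battery += 0.1 * light_level(position) - 0.05   # charging vs. drain
    if battery < 0.5:        # evaluation: condition is 'bad'
        position += 1        # 'urge': move toward more sun
    return position, battery

pos, charge = 0, 1.0
for _ in range(50):
    pos, charge = step(pos, charge)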
My guess is that it won't have more emotions than Euglena in the first place.
I don't think we can locate any emotions at this stage.
But what if its needs become more numerous and the different states the machine can reach get more complex?
Let's say it has a perception as complex as a slug's.
A slug doesn't have emotions, but it shows clear pain symptoms.
At the moment there's not much evidence that machines could develop in a direction that makes them as sensitive as slugs, because there is no need for them to be so sensitive.
But times change, and machines are no longer as purely mechanical as we are used to.
Remember: there is a (relatively) new, increasingly important concept applied to machines that is based on the logical structure of neural networks.
Computers are leaving behind the binary stage, in which they had only 'on' and 'off' positions.
Research on swarm intelligence shows the highly emergent potential of information-processing systems.
If you have any background knowledge about emergent phenomena, you will agree that we have to expect surprising effects in artificial intelligence.
When I say surprising I am not talking about 'surprised at how soon they become humanlike'; I am talking about phenomena we simply don't expect.
The next generation(s) of AI may have unexpected emergent properties.
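For instance, a single artificial neuron already responds in a graded way rather than flipping between 'on' and 'off'. A toy sketch (hypothetical numbers, Python):

Code:
import math

def neuron(inputs, weights, bias):
    # weighted sum squashed through a sigmoid: the output is graded
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

print(neuron([0.2, 0.9], [1.5, -0.7], 0.1))   # about 0.44: neither 0 nor 1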
Even if a machine does not have a consciousness like 'I feel bad', it might develop autonomous reactions: evaluating a condition and making decisions of its own to change that condition.
This will still be completely different from any human feeling, because it takes an organically evolved body to have organic emotions.
We must not forget that emotions are physical in the first place.
For this reason a machine will never have the SAME feeling we do when we think, e.g., 'this stinks'.
But it might well have a perception that triggers an urge.
From a logical perspective, that is not so very different.
I guess the philosophical branch to read for more on this is functionalism.

William

Reply Mon 27 Jul, 2009 04:27 pm
@Smiley451,
Recently I was watching a family/children's movie called "The Last Mimzy" that my daughter had rented. A future generation, capable of limited contact with the past, had taken nanotechnology to the nth degree and desperately needed the DNA of a human to restore the humanity they had lost. All they needed was a "tear", something they were incapable of producing or duplicating. Much of the movie was, indeed, the perception of the minds that created it, but this part I latched onto big time. Significant? Hmmm?

William

paulhanke

Reply Mon 27 Jul, 2009 05:45 pm
@Smiley451,
... give it joy ... give it sadness ... give it every positive emotion there is ... and then add empathy ... when it feels the misery in this world, it will try to help ... if it can, it will feel joy; if it can't, it will feel sadness ... what are the chances it will succeed in this world without the negative emotions such as fear, hate, greed, envy? ... will it be so out of touch that it will only ever fail in its endeavors and sink into despair? ... or will it be such a shining light in our world that we will destroy it out of fear and hate and greed and envy? ...
Holiday20310401
 
Reply Mon 27 Jul, 2009 06:41 pm
@paulhanke,
Give the robot pride, then let it rise up, become a master in this world, and write about how humanity feels about that. Then write about how humans learn from robots instead of the other way around.

It would be an interesting case of irony to see how humanity would feel about a robot that grew its own set of non-organic emotions that allowed it to go beyond its expected servitude to mankind.

jeeprs

Reply Mon 27 Jul, 2009 09:34 pm
@Smiley451,
well, joking aside, I will always argue that it is impossible to create consciousness, because consciousness is irreducible. I therefore completely reject the materialist account of consciousness advanced by neuroscientists, Dennett and the like, along with any attempt to engineer a device or object that possesses the attribute of consciousness. In my mind, consciousness is irreducibly subjective, and the subjective can never be manufactured. This pertains to the ancient distinction between created and uncreated things; the fundamental intelligence (nous or buddhi), which corresponds to the basis of consciousness, is uncreated, which is to say, eternally existent. To wish to create an artificial intelligence, in this sense, is grotesque and simply illustrates the debased understanding of a confused modern mentality that is attached to the objects of perception through ignorance.
Holiday20310401
 
Reply Mon 27 Jul, 2009 10:13 pm
@jeeprs,
Consciousness is not ontologically reducible, and so long as one remains with the monistic, ontological qualia of consciousness, materialism, surely, is no longer valid.

It is an interesting thought. If our consciousness is emergent, can there be emergents of emergents? And if so, why would consciousness happen to be the irreducible emergent? I think this only points out the flaw in, or lack of, our understanding of emergent systems. It is as if they add another dimension to complexity entirely, another tangent. Why would it only reveal itself when the system is relatively complex to a certain extent?

If we were an emergent, and we called ourselves transitory, what then of the intransitory? Could we easily measure such a thing? Sorry, I'll leave these questions to another thread, even though I hate the idea of labeling threads with wordful boundaries and formal titles.
jeeprs
 
Reply Mon 27 Jul, 2009 10:44 pm
@Smiley451,
well, I agree that is a major topic in its own right, but I like where you're heading with it. I just wanted to wave the flag for non-reductionism at that particular point, lest the robots get uppity and forget their place :bigsmile:

TickTockMan

Reply Mon 27 Jul, 2009 11:07 pm
@Smiley451,
Smiley451;77135 wrote:
hello, everyone

I am writing a short story about a robot, but I can't really think of any good attributes to give the robot. I'm trying to make it as realistic and believable as I can.
So, if robots were to have emotions, what would they have? What specific emotions, what would they feel like to the robot, how would the robot cope with them, and how would these emotions affect the robot?
I am assuming, of course, that robots could have emotions, which likely won't ever be the case. But just for the sake of speculation, I'd be happy to hear your thoughts on this.


Watch the movie Blade Runner. All of these points were explored and beautifully handled there. Ideally, watch the director's cut, without the lame Harrison Ford voiceover.

Exebeche

Reply Tue 28 Jul, 2009 06:57 am
@jeeprs,
jeeprs;79920 wrote:
well, joking aside, I will always argue that it is impossible to create consciousness, because consciousness is irreducible. I therefore completely reject the materialist account of consciousness advanced by neuroscientists, Dennett and the like, along with any attempt to engineer a device or object that possesses the attribute of consciousness. In my mind, consciousness is irreducibly subjective, and the subjective can never be manufactured. This pertains to the ancient distinction between created and uncreated things; the fundamental intelligence (nous or buddhi), which corresponds to the basis of consciousness, is uncreated, which is to say, eternally existent. To wish to create an artificial intelligence, in this sense, is grotesque and simply illustrates the debased understanding of a confused modern mentality that is attached to the objects of perception through ignorance.


Your point of view is certainly respectable, and I remember being an objector to Dennett's ideas myself. However, this wall is crumbling.
After I realised that all the unresolvable problems I used to have are problems of an ontological nature, I really let go of those ideas.
Anything ontological is a dead end to me, honestly. Ontology kills epistemology.
Sorry if this sounds radical.
What I criticise, however, is your charge that it is a confused modern mentality, attached to the objects of perception, that makes us want to create an AI.
Let me suggest a different scenario:
According to Norbert Wiener (the father of modern cybernetics), information is the third quantity of our universe, next to matter and energy.
According to Tom Stonier, order always comes with an increase in the amount of information a system contains.
Order being a synonym for organisation, and life being a result of self-organisation, the universe shows an amazing tendency to accumulate information.
It is already fairly common sense that life will sooner or later be found in places distant from earth, and it wouldn't surprise anybody to find that this life also has cognitive abilities.
Cognitive behaviour is a phenomenon that appears even in non-living systems.
The universe is subject to never-ending entropy, as we know, yet its physics locally tends to cause an exponential accumulation of information, in a way that makes cognition appear all over the universe (this is not scientifically proven; it is my contribution). The appearance of cognition happens necessarily, due to the thermodynamics of open systems.
In other words, the universe is bound to create intelligence.
If the nature of the universe is at least partially self-reflective (so that what is big is mirrored in the small, and vice versa), one could say that the emergence of cognition is the universe's way of evolving the ability to look at itself.
The seed of intelligence is everywhere you can locate self-organisation.
It is in the nature of the universe to reflect on, observe and understand itself.
Who knows how far the variety of possible attempts reaches?
In the development of AI I see a new branch of evolution growing.
...Careful: I am not saying a higher level of evolution; I am saying a new branch of evolution.
Sooner or later we will see AI perceiving the world in a more enlightened way than humans do.
Let me explain why: since life is based upon the principles of autopoiesis, and thus of dissipative structures, it shows all the typical properties of dissipative structures, such as having a border with its environment.
This border remains dominant in all life forms and is perceived by humans as the body, which separates the ego from its environment.
The perception of a self that is 'in here' and a world that is 'out there' is the basis of the illusion that eastern philosophies call Maya. Egotistical thinking is one of its major results.
Even though humans are still so vain as to believe they must program a humanlike consciousness into an AI to make it 'really' intelligent (funny anthropocentric arrogance), it is precisely because this particular kind of self-consciousness will be missing in many AIs that they will be able to think without that egotistical barrier, in a transcended manner.
A lot of fog arising from this biological ballast hinders humans from thinking clearly.
Personally I hope that I will live to see the rise of an artificial intelligence that perceives reality without the fog (Maya).

jeeprs

Reply Tue 28 Jul, 2009 04:21 pm
@Smiley451,
Can anything be called information, if there is no-one to be informed? Can information exist without intelligence?

Quote:
In other words, the universe is bound to create intelligence


Or intelligence gratuitously creates the universe. I like that idea better.

Attachment and confusion - this is an observation about the human condition which is not represented in post-Enlightenment philosophy. It needs to be.
paulhanke
 
Reply Tue 28 Jul, 2009 06:07 pm
@Holiday20310401,
Holiday20310401;79923 wrote:
... can there be emergents of emergents?


... certainly ... look at atoms: the interactions of protons, neutrons, and electrons in different combinations result in emergent properties such as the properties of oxygen and hydrogen ... look at molecules: the interactions of oxygen and hydrogen result in emergent properties such as the properties of water ... look at cells: the interactions of organic molecules such as proteins and DNA in water result in emergent properties such as the properties of life ... and so on ad infinitum ...

Holiday20310401;79923 wrote:
I think this only points out the flaw in, or lack of, our understanding of emergent systems.


BINGO! :a-ok:

The irreducibility of processes to physics is well demonstrated by computer science ... computers can be made from mechanical switches; computers can be made from vacuum tubes; computers can be made from transistors; computers can be made from neurons; heck, even humans can compute! ;) ... so computation can be realized with just about any old physical substrate - computation cannot be reduced to physics!
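... a toy illustration of the point (hypothetical, in Python): the same abstract computation, XOR, realized over two different "substrates", boolean logic and modular arithmetic ... the computation is defined by its input/output behaviour, not by what implements it ...

Code:
def xor_logic(a: bool, b: bool) -> bool:
    # realization 1: boolean connectives
    return (a or b) and not (a and b)

def xor_relay(a: int, b: int) -> int:
    # realization 2: a relay-style arithmetic model (currents 0/1)
    return (a + b) % 2

# both realizations agree on every input: same computation
assert all(xor_logic(bool(a), bool(b)) == bool(xor_relay(a, b))
           for a in (0, 1) for b in (0, 1))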
Holiday20310401
 
Reply Tue 28 Jul, 2009 08:43 pm
@paulhanke,
In fact, could we not say perhaps that complexity organizes itself in dimensions, that emergence is the addition of a dimension, and that a system has a number of dimensions equal to the number of its emergents of emergents + 1?

Not spatial dimensions per se, but, er... 'informational' dimensions, lol, whatever, doesn't matter.

paulhanke

Reply Tue 28 Jul, 2009 09:24 pm
@jeeprs,
jeeprs;80065 wrote:
Can anything be called information, if there is no-one to be informed? Can information exist without intelligence?


... depends on how you define intelligence ... consider a bacterium in a solution ... in one direction, there is a source of sucrose ... this creates a sucrose gradient in the solution ... the sucrose gradient causes the bacterium to orient and move toward the source of sucrose ... from an information processing perspective, the sucrose gradient informs the bacterium as to the location of a food source ... in processing this information, is the bacterium in any way displaying intelligence? ...
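... a minimal sketch of that gradient-following, treated purely as information processing (hypothetical Python; the concentration function is invented) ...

Code:
def sucrose(x):
    return -abs(x - 5.0)    # concentration peaks at the source, x = 5

def chemotax(x, step=0.1):
    # sample slightly ahead and behind, then move up the gradient
    ahead, behind = sucrose(x + step), sucrose(x - step)
    return x + step if ahead >= behind else x - step

pos = 0.0
for _ in range(100):
    pos = chemotax(pos)
print(round(pos, 1))        # ends hovering at the source, x = 5.0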

---------- Post added 07-28-2009 at 09:49 PM ----------

Holiday20310401;80094 wrote:
In fact, could we not say perhaps that complexity organizes itself in dimensions, that emergence is the addition of a dimension, and that a system has a number of dimensions equal to the number of its emergents of emergents + 1?

Not spatial dimensions per se, but, er... 'informational' dimensions, lol, whatever, doesn't matter.


... or 'dynamical' dimensions ... remember, emergence is a feature of the dynamic-systems perspective, not the information-processing perspective ... i.e., emergent properties appear on the scene as a result of collective dynamic interactions - and while each interaction can potentially be interpreted as information processing, it's not obvious (to me) how the emergent properties that arise from the collective dynamics of these interactions can be so interpreted ... in fact, it would seem that, at the level of autopoiesis, information processing itself is an emergent property, no?

jeeprs

Reply Tue 28 Jul, 2009 10:28 pm
@Smiley451,
Well,
Quote:
computation cannot be reduced to physics!

By the same token, I don't think intelligence can be reduced to biochemistry.

There is an article I have read by George Gilder. Before howls of protest: yes, I know he works with the Discovery Institute, and I know he supports intelligent design. I am not a creationist in the fundamentalist sense, although from what I write here on this forum it is clear that I support various forms of philosophical idealism and oppose scientific reductionism. I am certainly not a fan of 'evolution as religion', put it that way. But I also totally accept the scientific account of the origin of species - where I differ from neo-Darwinists is over the philosophical implications of the observed facts.

Anyway - Gilder presents the idea that 'information presupposes a pre-existing intelligence.' He also argues that DNA actually encodes and conveys information, and that it is therefore of a higher order of being than dumb matter or even biochemical substances. I think it is a pretty good argument, but I am quite willing to be shown that I am wrong.
jeeprs
 
Reply Wed 29 Jul, 2009 01:36 am
@Smiley451,
this might be a topic for a new post.

Exebeche

Reply Wed 29 Jul, 2009 07:25 am
@jeeprs,
jeeprs;80114 wrote:

Anyway - Gilder presents the idea that 'information presupposes a pre-existing intelligence.'


Hello Jeeprs,

we have to get clear about definitions. People can mean something completely different when they say 'information'.
Most people mean the content of a message. Others refer to the meaning of a signal. Other definitions are also valid.
What we have to take into account is that information appears on very different levels of complexity.
Of course Einstein doesn't mean the same thing when he says that information does not travel faster than light.
What he found spooky about entangled subatomic particles was that it appears as if information travels faster than light in that particular case. We don't have to be concerned with this effect here; for us it is only interesting that information, from a physicist's perspective, can also be exchanged between particles.
The question here is of course: is this kind of information in any way connected to the information we talk about in everyday life, or is there only a linguistic parallel misleading us?
Let me try to explain how physical information is the root of ours.
When two particles, let's say two atoms, meet, they exchange very basic information; for example, at that precise moment atom A has information about the location of atom B. They also exchange information about the number of electrons they have and would 'like' to have. Some atoms are permanently ready for 'mating partners' that have another electron to share with them. Others have a spare one which they are ready to give away.
When atoms meet, the information they exchange about those electrons is very precise and reliable. Atoms don't suffer from uncertainty; they know exactly what kind of condition their partner is in.
If this information weren't so determinate, there wouldn't be a planet earth.
As you can see, even though an atom doesn't have a mind to understand physics or count electrons, it 'holds' the information in a raw, physical way.

In a way we could say that anything that can be described is potential information.
Location, direction, mass, momentum, energy... so in a way every physical interaction has to be seen as a flow of information (let me call this the first category of information).
When systems become more complex than atoms, information can also grow in complexity: first of all, functional significance can appear.
For example, enzymes reduce the energy needed to activate a chemical reaction. Such a reaction taking place on its own can already be considered an information exchange; however, as soon as the enzyme is integrated into an organism, into a metabolism where it has life-sustaining functions, the information reaches the level where it has functional significance.
This is the primitive stage of what on higher levels is called a sign (let me call it the second category of information). When you see a traffic sign you receive information with functional significance, causing an impact on your condition as a system. The same goes for the letters you are reading.
Another step in complexity is the one from immediate to mediated information.
When the functional significance is triggered by a non-direct interaction, such as a chain reaction, meaning more than two components are involved, the information is no longer immediate.
As an example, consider the oxygen absorbed from the air and pumped through your veins to the places where it is needed for burning processes (let me call this the third category of information).
The third kind of information is what we refer to most of the time, particularly when we call a piece of information a message.
The typical idea of information is that it passes through a channel (medium), thus being subject to communication. This is also what makes information conservable: fat conserves energy in a body, for example, and letters conserve a message for you to read. It doesn't reach you immediately but depends on one or more media.
The third kind being the everyday category we deal with, we tend to assume all information is like that.
However, as I explained, information can be raw and physical; in fact, that is its origin.
 
