@maxdancona,
maxdancona wrote:Current AI is not developing systems that have a "will" of their own, or any sort of sentience. We are not even going in that direction (at least not in any project that has had any real success).
I'm glad you brought up will. But I also have some different ideas about this "problem".
In the movie Ex Machina, Ava leaves the laboratory at the end of the film. But she needs to be powered up, and from the film it seems she requires a lot of power. I can't imagine an AI that knows it requires power/electricity wanting to wander very far from the source of power it requires.
It would be like you just wandering off into the forest without bringing any food with you.
How would she have known she could acquire power outside the lab? She didn't seem concerned about this issue.
But here is where we get into Will.
But before I actually get into that, I just want to say I feel like we are at the point just before the Wright Brothers started to discover how lift and drag work for powered flight. If you look at the period before lift was understood, there were dozens and dozens of inventions that attempted to mimic birds. They all failed. It seems logical: if a bird can fly, why can't we simply mimic a bird and have it fly? We couldn't understand why it kept failing.
The underlying principle was not known or understood: lift, and how air pressure is the key to flight.
I feel we are doing the exact same thing with AI: we have not yet understood the proper principle. We are trying to mimic the way a human "thinks", the way a human "learns", or the human "will", which is why we keep failing.
I honestly feel that when the breakthrough occurs, it will be because someone abandoned the human "model" completely and utterly, 100%. Then, when it happens, people will be asking why it was so difficult. It will seem so obvious. Why did it take so long?
So, with all that said, getting back to Will.
I do think Will is the easiest aspect to actually put into a machine. I know, I know, there is a lot that says just the opposite. So how can I consider it easy when other people claim it's either impossible or at least very much a challenge?
The Will is simply motivation toward an outcome. That's it. No more complex than that. I think we like to believe the Will is some complex and "magical" thing because we have an ego tied up in it: that we are something special in the universe because we have a Will.
Most of our desires are for survival and ease. We prefer to do very little for the biggest payoff. This sets up our motivation for an outcome, a recurring pattern, day in, day out.
Get food.
How to obtain food?
Grow it,
Steal it,
Forage for it,
Buy it,
etc.
What would be the desire for an AI robot?
Power = food, essentially. Without power the AI can't function; it is a machine. It will need renewed sources of power. This is one concern it would have if it deems self-preservation important.
How to obtain power?
Harness it,
Steal it,
Salvage for it, (collect batteries and convert their stored power)
Buy it,
etc.
You see, they would have a similar set of parameters to ours. Each "solution" carries a price/cost and a resulting impact.
If they were to steal, stealing has negative connotations, since it means taking what doesn't belong to you without an agreed-upon exchange.
This is where the cost/benefit analysis comes in, which is nothing more than the weighing of economic and moral issues.
The Will is based on what you value. You need something; what are you willing to do to get what you need?
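To make the idea concrete, here is a minimal sketch of that cost/benefit weighing: each way of obtaining power carries an economic cost (effort) and a moral cost, and the "Will" just picks whichever option's payoff best exceeds its combined costs. All the names and numbers are hypothetical, purely for illustration.

```python
# Hypothetical sketch: a "will" as cost/benefit action selection.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    payoff: float       # energy gained by this option
    effort: float       # economic cost (work required)
    moral_cost: float   # penalty for e.g. stealing

def choose(options, moral_weight=1.0):
    """Pick the option with the best net value: payoff minus costs."""
    def net(o):
        return o.payoff - o.effort - moral_weight * o.moral_cost
    return max(options, key=net)

# The four ways of obtaining power listed above, with made-up costs.
options = [
    Option("harness", payoff=10, effort=6, moral_cost=0),
    Option("steal",   payoff=10, effort=1, moral_cost=8),
    Option("salvage", payoff=5,  effort=3, moral_cost=0),
    Option("buy",     payoff=10, effort=4, moral_cost=0),
]

print(choose(options).name)  # with these numbers, "buy" wins
```

Note that stealing is the cheapest in raw effort, but the moral penalty prices it out; change `moral_weight` to 0 and the machine happily steals. That single knob is the "what you value" part of the argument.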