The Future of Artificial Intelligence

 
 
stuh505
Reply Mon 26 Feb, 2007 07:51 pm
ebrown_p wrote:
I disagree (respectfully of course)...


I agree with you. But when I refer to behavior that is expected, I mean something different. You can expect that whatever program you come up with is going to be limited to the capabilities of the most basic instructions that it codes for.

In an organism, we don't know exactly how things are coded for, but the encoding is so general that you could equate it with instructions for how to move a mechanical arm to carve something. If your most basic instructions are "add leg" and "add sinusoidal controller to leg", you obviously can't have the same kind of generality that real evolution has.
ebrown p
Reply Mon 26 Feb, 2007 09:03 pm
Quote:

You can expect that whatever program you come up with is going to be limited to the capabilities of the most basic instructions that it codes for.


I don't think I agree with this statement. There are all kinds of examples of "functions" that far exceed the capabilities of their basic instructions.

My current job involves speech recognition. The most basic instructions are AND and NOR gates. We can make these very simple capabilities "listen" to speech and turn it into text.

Likewise, human intelligence is apparently "caused" by electrochemical signals through neurons in our brains. These basic instructions (i.e. electrochemical signals passing from neuron to neuron) are not that complex. We (meaning any graduate student in science) understand the basic chemistry quite well.

Can our intelligence be thought of as a "program" that is not limited by the most basic instructions that it codes for? The fact that we understand the basic chemistry doesn't mean that we understand how 'intelligence' stems from this chemistry. Unless you posit a "supernatural" part of intelligence that transcends the basic chemistry we see taking place, this appears to be the case.

If real intelligence is caused by electrochemical pulses travelling through neural pathways controlled by a "program", it is at least possible that a similar phenomenon could be formed through electrical pulses travelling through AND and NOR gates (controlled by programming).

Again, the real question here is whether "intelligence" (which we haven't defined yet) can be replicated in a Turing machine. And, I don't pretend to have a clue about this.

But, if it can, and human intelligence was developed through a genetic process in a "program space" that was very large and flexible... it seems a similar process in a large and flexible Turing program space wouldn't be unthinkable.
stuh505
Reply Mon 26 Feb, 2007 10:30 pm
Quote:
I don't think I agree with this statement. There are all kinds of examples of "functions" that far exceed the capabilities of their basic instructions.

My current job involves speech recognition. The most basic instructions are AND and NOR gates. We can make these very simple capabilities "listen" to speech and turn it into text.


Speech can be represented as a binary string, and if your single basic component is a NAND gate, you could theoretically evolve a computer program to do anything in the realm of software. This is what I'm talking about when I refer to the "capabilities" of your basic instructions.

Usually though, your basic instructions are not NAND gates and your basic representation is not a binary string representing a compiled program!
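The point about a single NAND primitive is worth making concrete. Here is a minimal sketch in Python (the helper names are mine, purely illustrative): NOT, AND, OR, and XOR can all be composed from NAND alone, which is why NAND is called functionally complete.

```python
# A NAND gate is functionally complete: every other Boolean gate can be
# built by composing NANDs. Inputs and outputs are 0/1 integers.

def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    # Classic 4-NAND construction of XOR.
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# Exhaustively check the composed gates against Python's own operators.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor(a, b) == (a ^ b)
```

In the same spirit, any Boolean circuit (and hence any fixed-size computation over binary strings) can in principle be expressed as a network of these NAND calls alone.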

Quote:

Likewise, human intelligence is apparently "caused" by electrochemical signals through neurons in our brains. These basic instructions (i.e. electrochemical signals passing from neuron to neuron) are not that complex. We (meaning any graduate student in science) understand the basic chemistry quite well.


It depends on what exactly you refer to when you say intelligence. The undisputed facts here are that the brain is largely composed of neurons connected in a tangled web of synapses that propagate electrochemical signals via ligand-gated channels and yadda yadda. Also that these are used to perform many computational tasks.

By rearranging the energy state of a system, it is certainly possible to create a machine capable of acting in an intelligent way. When I refer to the energy state I am referring to the complete state of all energy (and hence matter) involved in the system. Clearly by rearranging matter we can create computers and also humans with brains.

However, as I see it there is quite a strong PARADOX here according to current thinking. As a mathematical analogy, which I think is quite accurate, consider trying to find the linear combination of 3 vectors in R^3 that represents an arbitrary vector in R^4. It can't be done.
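The vector analogy is easy to check numerically. A small sketch with numpy (variable names are mine, purely illustrative): three vectors span at most a 3-dimensional subspace, so the best linear combination still misses a target along the fourth axis, and the least-squares residual shows the miss directly.

```python
import numpy as np

# Three vectors, each living in R^4 (think of R^3 vectors padded with a
# zero fourth coordinate). Their span has dimension at most 3.
basis = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
]).T  # shape (4, 3): columns are the three spanning vectors

target = np.array([0.0, 0.0, 0.0, 1.0])  # points along the missing axis

# Best least-squares combination of the three vectors...
coeffs, residuals, rank, _ = np.linalg.lstsq(basis, target, rcond=None)

# ...still misses the target entirely: the error equals |target|.
print(rank)                                     # 3
print(np.linalg.norm(basis @ coeffs - target))  # 1.0
```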

The problem here is that awareness/consciousness is not defined in terms of the energy state of the system and there is no explanation why a certain specific configuration of energy would suddenly, completely out of the blue, give rise to awareness.

You might counter by saying that there are many phenomena that seem to arise in this way. But I don't see that anything other than awareness has this problem. Electromagnetic field interactions are all describable in terms of how they affect the energy state of the system. The field is a potential for exerting forces on other particles, and since the locations and distributions of these particles are responsible for everything from light to magnets and electrical current, that is completely explainable. But you cannot define awareness in terms of how it affects the energy state of other particles.

Quote:
Can our intelligence be thought of as a "program" that is not limited by the most basic instructions that it codes for? The fact that we understand the basic chemistry doesn't mean that we understand how 'intelligence' stems from this chemistry.


Well, before we were talking about the limits of what could be evolved through a genetic algorithm. In the case of humans, the limits of what can be evolved are defined by the laws of physics at the scale of proteins and molecular interactions. That is such a large domain that it is not really foreseeable. Usually a GA has a much smaller potential domain, where the entire solution space can pretty much be imagined. For example, you might evolve a 100-bit string where each bit corresponds to how you attach a set of limbs and set up controllers, to evolve a walking robot in a virtual 3D world.
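The 100-bit setup just described can be sketched as a toy genetic algorithm. Everything below is illustrative: in particular the fitness function is a stand-in that just counts 1 bits, whereas the scenario in the text would score the encoded limb layout by simulating the robot walking in a 3D world.

```python
import random

random.seed(0)

BITS = 100          # each bit: an illustrative limb/controller choice
POP = 50
GENERATIONS = 200

def fitness(genome):
    # Stand-in fitness: count of 1 bits. A real version would score how
    # well the encoded robot walks in a simulated 3D world.
    return sum(genome)

def mutate(genome, rate=0.01):
    # Flip each bit independently with small probability.
    return [b ^ 1 if random.random() < rate else b for b in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    cut = random.randrange(1, BITS)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(BITS)]
              for _ in range(POP)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]          # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children          # elitism: parents survive

best = max(population, key=fitness)
print(fitness(best))   # converges toward the maximum of 100
```

The point of the contrast in the text: here the whole solution space (all 2^100 bit strings) and the ceiling of what can evolve are fixed in advance by the encoding, whereas biological evolution's "encoding" is the laws of physics themselves.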

Quote:
Unless, you posit a "supernatural' part of intelligence that transcends the basic chemistry we see taking place, this appears to be the case.


Not supernatural. But my point is, absolutely, that the awareness part of our intelligence is NOT explainable by modern physics. Not supernatural, just not understood yet.

Everything else about intelligence is hypothetically doable by a Turing machine, a.k.a. a brain or a computer. It would certainly not be theoretically impossible to create a program that is capable of controlling a human body as effectively or more effectively than a human mind can. However, that may be extremely difficult...

Another question it raises to me, is if that is possible, then why did we evolve consciousness in the first place? Being aware of what is going on just isn't necessary. A computer is not aware and yet it functions. I think the key is that the awareness part might actually simplify what needs to be coded for. But when we are trying to recreate AI, we don't have that benefit.

Quote:
If real intelligence is caused by electrochemical pulses travelling through neural pathways controlled by a "program", it is at least possible that a similar phenomenon could be formed through electrical pulses travelling through AND and NOR gates (controlled by programming).


Indeed

Quote:
Again, the real question here is whether "intelligence" (which we haven't defined yet) can be replicated in a Turing machine. And, I don't pretend to have a clue about this.

But, if it can, and human intelligence was developed through a genetic process in a "program space" that was very large and flexible... it seems a similar process in a large and flexible Turing program space wouldn't be unthinkable.


Well, if you define intelligence in terms of being able to think of new ideas, adapt to one's surroundings, and do everything that humans do to be successful... there's nothing fundamental stopping a Turing machine from that, although it is certainly not an easy task, and we aren't very close!
Cycloptichorn
Reply Thu 1 Mar, 2007 12:59 pm
Re: The Future of Artificial Intelligence
Brandon9000 wrote:
I would be interested in hearing people's opinions on this subject. Although my background is in the sciences, I made a professional transition into computer programming in the mid 1980s. It seems to me that artificial intelligence, as it now exists, is fairly primitive. I see no evidence that any man-made machine or program currently in existence is self-aware, or doing anything like thinking. Some can simulate conversation, some can play chess well, but it seems to be pretty much a parlor trick.

Here is the question. Do you believe that there is some possibility and that the chances are not completely negligible that a machine smarter than a human being will be built by human beings within the next....say 300 years? If this did happen, what might the consequences be? Is it possible that the master-slave relationship between men and machines could ever be inverted? Your thoughts, please.


Nice topic.

I think that we face a real opportunity and danger with AI; it could spiral out of control quickly if there weren't some serious moral programming.

The advent of quantum computing is probably the largest step we have left towards actually making a child-like computer. Actual imagination - inspiration - creativity? It's still up in the air whether or not we will make a truly creative machine.

It's a little scary, honestly.

Cycloptichorn
Cycloptichorn
Reply Thu 1 Mar, 2007 01:01 pm
I forgot to add - if you want to read a great sci-fi on what the future of AI could be like - and I mean the future, 20k years in the future - then check out Excession, by Iain M. Banks. Awesome novel.

In his future 'culture,' AIs run every spaceship and every planet/space station in a sort of coexistence with the humanity who spawned them. They find us fascinating due to the non-linear nature of our thought, which they have a hard time replicating.

Cycloptichorn
ScienceLawyer
Reply Thu 1 Mar, 2007 01:34 pm
I don't believe we could ever create an artificial "being" that would be "smarter" than the human that built him. I'm not a religious person and don't believe in intelligent design, which is religion, but I think humans are something so complex that even we cannot replicate it.

But of course we'll be able to teach a computer how to keep that roast tender and clear the dinner table when its masters are finished.
ebrown p
Reply Thu 1 Mar, 2007 01:46 pm
Quote:

Not supernatural. But my point is, absolutely, that the awareness part of our intelligence is NOT explainable by modern physics. Not supernatural, just not understood yet.

Everything else about intelligence is hypothetically doable by a Turing machine, a.k.a. a brain or a computer. It would certainly not be theoretically impossible to create a program that is capable of controlling a human body as effectively or more effectively than a human mind can. However, that may be extremely difficult...


I think our positions are getting closer as we progress in this discussion.

I still think you are going too far in this statement.

There is no proof that "awareness" (setting aside the obvious problem that we don't have a definition of it) isn't explainable by modern physics.

It is certainly possible that the mechanisms of "awareness" are nothing more than the electrochemical signals passing through adaptive neural pathways, and that the barrier is the complexity. This would mean that when we simulate these mechanisms in software, we will get the same results that you get from human intelligence.

You say "everything else about intelligence is hypothetically doable by a turing machine AKA a brain or a computer...".

Consider the proposition that "awareness" is "doable by a Turing machine", meaning that the awareness of a software system would be indistinguishable from human awareness (you would have to come up with a test of this, which would force you into a good measurable definition).

But I am making the proposition that it is possible that a sufficiently complex Turing-machine-based system arrived at through genetic algorithms could reach any level of "awareness" or "intelligence" that human beings have (under any measure of "awareness" you can come up with).

This proposition has neither been proven nor disproven... which is why it is an interesting question.

I may be misreading your arguments, but it seems your posts imply that you think this isn't possible.
rosborne979
Reply Thu 1 Mar, 2007 02:29 pm
Cycloptichorn wrote:
I forgot to add - if you want to read a great sci-fi on what the future of AI could be like - and I mean the future, 20k years in the future - then check out Excession, by Iain M. Banks. Awesome novel.


I agree, Excession was a great book. It reminded me of Charles Stross' novel Singularity Sky.
stuh505
Reply Thu 1 Mar, 2007 02:31 pm
Quote:
I think our positions are getting closer as we progress in this discussion.


Our positions are staying the same, but our understanding of each other's positions is coming closer. :)

Quote:
But I am making the proposition that it is possible that a sufficiently complex Turing-machine-based system arrived at through genetic algorithms could reach any level of "awareness" or "intelligence" that human beings have (under any measure of "awareness" you can come up with).


I don't know about you, but I've had this discussion before! You actually raise a very interesting philosophical question which I enjoy pondering. Basically, can virtuality be as real as reality? Given an infinitely powerful computer with perfect programming, could our entire universe be simulated exactly from the big bang to now, with life, intelligence, and all evolving autonomously from within this virtual environment?

From the standpoint of current physics, this is not a philosophical question. It is definitively a "yes." But the question is not over, so let me clarify what I mean by "the standpoint of current physics": I mean the set of laws and theories that we hold as physics today. If we represent those laws accurately in virtual, then any results we derive from them will be accurate.

Now, the remaining questions are:
1) Do we know enough of the laws of physics to simulate awareness as a physical process? Clearly we do not know all the laws of physics at this point, only a subset... is that subset enough?

2) Given the complete set of laws, does the above affirmation still hold? I was only able to deduce that virtuality can be as real as reality because all present scientific laws are defined in terms of how they change the energy state of some system. But it was a mind-opener for me when I realized that it is not necessarily the case that all the laws of the universe are really defined in terms of energy.

After much contemplation, which is extremely difficult given my puny brain attempting to unravel the possibilities of the universe, I believe that it is not impossible that some laws of the universe may be defined in terms of "reactions" which are not describable in terms of energy.

In particular, I think that "awareness" may be the result of some such law(s). IF that is the case, then it would not be possible to represent those laws in virtual because they have only real meaning, and cannot be quantified in any meaningful way.
rosborne979
Reply Thu 1 Mar, 2007 02:53 pm
If we were able to create a machine whose actions and behaviors were indistinguishable from a human being, would we accept that as 'awareness', and assign to that machine the designation of "alive"?

(this is Blade Runner stuff)
stuh505
Reply Thu 1 Mar, 2007 02:59 pm
rosborne979 wrote:
If we were able to create a machine whose actions and behaviors were indistinguishable from a human being, would we accept that as 'awareness', and assign to that machine the designation of "alive"?
(this is Blade Runner stuff)


One does not accidentally create self-awareness. So if we made such a machine, we would already know whether we had created a new conscious being or just a very thoroughly well-programmed computer.
ebrown p
Reply Thu 1 Mar, 2007 03:28 pm
stuh505 wrote:
rosborne979 wrote:
If we were able to create a machine whose actions and behaviors were indistinguishable from a human being, would we accept that as 'awareness', and assign to that machine the designation of "alive"?
(this is Blade Runner stuff)


One does not accidentally create self-awareness. So if we made such a machine, we would already know whether we had created a new conscious being or just a very thoroughly well-programmed computer.


You are stating this as an axiom.

I haven't seen any proof that one could not create self-awareness "by accident". And this also has the same problem that neither "self-awareness" nor "by accident" has been defined. Is reaching self-awareness by self-selecting mutations (i.e. evolution) an example of "by accident"?

We don't know if what rosborne is suggesting, a consciousness developing from a process that humans started without their knowledge, is possible or not. I don't think it can be ruled out so simply.
rosborne979
Reply Thu 1 Mar, 2007 03:36 pm
ebrown_p wrote:
You are stating this as an axiom.

I haven't seen any proof that one could not create self-awareness "by accident". And this also has the same problem that neither "self-awareness" nor "by accident" has been defined. Is reaching self-awareness by self-selecting mutations (i.e. evolution) an example of "by accident"?

We don't know if what rosborne is suggesting, a consciousness developing from a process that humans started without their knowledge, is possible or not. I don't think it can be ruled out so simply.


I agree. We 'do' know that evolution 'has' produced self-awareness through natural processes. And I haven't seen anything in current software design which prevents similar processes from being initiated artificially. I agree that our 'current' systems are not capable of this (or they already would have), but in my opinion, we are not far from a time when hardware and software systems will do this (begin 'evolving').
stuh505
Reply Thu 1 Mar, 2007 05:03 pm
e_brown, did you see my response to you on the previous page?
Brandon9000
Reply Fri 2 Mar, 2007 04:51 am
Cycloptichorn wrote:
I forgot to add - if you want to read a great sci-fi on what the future of AI could be like - and I mean the future, 20k years in the future - then check out Excession, by Iain M. Banks. Awesome novel.

In his future 'culture,' AIs run every spaceship and every planet/space station in a sort of coexistence with the humanity who spawned them. They find us fascinating due the non-linear nature of our thought, which they have a hard time replicating.

Cycloptichorn

Although I have little time these days and am accumulating huge piles of books on my "to read" list, I'm a big sci-fi fan, and you may be sure that this book will go on my list and eventually be read. Thanks for the recommendation.
Brandon9000
Reply Fri 2 Mar, 2007 05:03 am
Let me ask again one part of my original question, which was very much in my mind when I started this thread, and which has not been talked about so much: If eventually some of our computers do reach the point of developing something like intelligence and self-awareness, is it possible that further advances might become very dangerous to the human race? Could there be a real danger waiting for us somewhere down the road if we're not careful?
rosborne979
Reply Fri 2 Mar, 2007 08:50 am
Brandon9000 wrote:
Let me ask again one part of my original question, which was very much in my mind when I started this thread, and which has not been talked about so much: If eventually some of our computers do reach the point of developing something like intelligence and self-awareness, is it possible that further advances might become very dangerous to the human race? Could there be a real danger waiting for us somewhere down the road if we're not careful?


rosborne979 wrote:
I think that the next stage of natural evolution (on this planet at least) is for humans to create artificial intelligence, and a new form of [mechanical] life. And yes, I think it's going to be incredibly dangerous, but also incredibly beneficial, and ultimately unavoidable. A bit like atomic energy.

Ultimately, I believe in the survival of humanity, probably in some form of shared symbiosis with our machines. With things like quantum supercomputing, nano machines, and bio-genetic manipulation, I suspect that people 1000 years from now may be very different beings than we are today.
Neuroscientia
Reply Sun 18 Jun, 2017 09:28 am
I think scientists can actually train AI to machine-learn.
arianajohnson
Reply Mon 24 Jul, 2017 06:13 am
@Brandon9000,
According to Mark Zuckerberg, artificial intelligence is the next step for business and computing. It will take between 5 and 10 years for this technology to infiltrate the mainstream market and be used in everyday life.
Olivier5
Reply Mon 24 Jul, 2017 06:43 am
@Brandon9000,
We'll just have to unplug them computers, if they get too smart.