
The Future of Artificial Intelligence

 
 
Fri 26 Jan, 2007 08:37 am
I would be interested in hearing people's opinions on this subject. Although my background is in the sciences, I made a professional transition into computer programming in the mid 1980s. It seems to me that artificial intelligence, as it now exists, is fairly primitive. I see no evidence that any man-made machine or program currently in existence is self-aware, or doing anything like thinking. Some can simulate conversation, some can play chess well, but it seems to be pretty much a parlor trick.

Here is the question. Do you believe that there is some possibility and that the chances are not completely negligible that a machine smarter than a human being will be built by human beings within the next....say 300 years? If this did happen, what might the consequences be? Is it possible that the master-slave relationship between men and machines could ever be inverted? Your thoughts, please.

parados
Fri 26 Jan, 2007 02:06 pm
I recall reading somewhere that a computer would have to work similarly to a brain, in that it would need to find the best routes to solutions through trial and repetition. The repetition would then make certain routes more likely.

It does raise a lot of questions, though. One of the basic drives in all living beings is the desire to survive, a need that is necessary for the species to continue. Would an artificial intelligence need that? It can be turned on and off and rebuilt. It doesn't need to reproduce to exist.
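A rough Python sketch of that trial-and-repetition idea (the routes and their success rates are invented for illustration; routes that succeed get their selection weight reinforced, so they are chosen more often over time):

Code:
import random

# three hypothetical routes to a solution, initially equally preferred
routes = {"A": 1.0, "B": 1.0, "C": 1.0}
success_rate = {"A": 0.2, "B": 0.7, "C": 0.4}   # stand-in for trial outcomes

def try_route(name):
    return random.random() < success_rate[name]

for _ in range(1000):
    pick = random.choices(list(routes), weights=list(routes.values()))[0]
    if try_route(pick):
        routes[pick] *= 1.05   # repetition of success makes this route more likely

print(max(routes, key=routes.get))   # usually "B", the most reliable route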

Brandon9000
Fri 26 Jan, 2007 03:08 pm
parados wrote:
I recall reading somewhere that a computer would have to work similarly to a brain, in that it would need to find the best routes to solutions through trial and repetition. The repetition would then make certain routes more likely.

It does raise a lot of questions, though. One of the basic drives in all living beings is the desire to survive, a need that is necessary for the species to continue. Would an artificial intelligence need that? It can be turned on and off and rebuilt. It doesn't need to reproduce to exist.

Survival instinct is apparently the result of evolution. For a machine to have it, it would probably have to be designed in. Some desire to survive might arise naturally out of self-awareness, but it would be unlikely to be as strong.

spendius
Fri 26 Jan, 2007 03:32 pm
There's a scene in The Tin Men by Michael Frayn where a guy puts two computers in a boat to try to teach them ethics.

He has them programmed and then starts the boat sinking so that it won't hold both of them.

They both jump overboard.

But it's a long time since I read it. Thanks for the reminder Brandie. I'll read it again. He's a decent writer.

I don't remember any other scene in the book so that one must have struck me good.

rosborne979
Sat 17 Feb, 2007 02:23 pm
Re: The Future of Artificial Intelligence
Brandon9000 wrote:
I would be interested in hearing people's opinions on this subject. Although my background is in the sciences, I made a professional transition into computer programming in the mid 1980s. It seems to me that artificial intelligence, as it now exists, is fairly primitive. I see no evidence that any man-made machine or program currently in existence is self-aware, or doing anything like thinking. Some can simulate conversation, some can play chess well, but it seems to be pretty much a parlor trick.


I agree, no machine today is in any way 'intelligent' the way we are. Our present level of coding skill and speed of computation are only capable of imitating rudimentary aspects of intelligent behavior.

I started programming computers professionally back in the early 80s, writing Macro-11 assembler on PDP-11s. In the 1990s I switched to a company doing artificial intelligence work using LISP and a few other languages. However, those languages, even though they use different subroutines for solving problems, are still just broken down into machine language by static compilers and assemblers. There is no dynamic aspect to it, which would be necessary as a first stage in intelligent behavior.
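As a toy illustration of that missing "dynamic" ingredient (this is only a sketch, not a description of any real AI system): a program that generates, compiles, and runs new code for itself at run time, which a statically compiled program never does.

Code:
# source code created while the program is running...
src = "def solver(x):\n    return x * x + 1\n"

namespace = {}
exec(compile(src, "<generated>", "exec"), namespace)   # ...compiled and loaded on the fly
print(namespace["solver"](5))   # 26: calling a function that did not exist at compile time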

Brandon9000 wrote:
Here is the question. Do you believe that there is some possibility and that the chances are not completely negligible that a machine smarter than a human being will be built by human beings within the next....say 300 years?


Yes. I think we are less than 100 years away from dynamically self-coding systems based on genetic algorithms that will result in systems indistinguishable from human thought. I think that Genetic Programming will be the primary method for generating truly intelligent artificial systems.

Brandon9000 wrote:
If this did happen, what might the consequences be? Is it possible that the master-slave relationship between men and machines could ever be inverted? Your thoughts, please.


I think that the next stage of natural evolution (on this planet at least) is for humans to create artificial intelligence, and a new form of [mechanical] life. And yes, I think it's going to be incredibly dangerous, but also incredibly beneficial, and ultimately unavoidable. A bit like atomic energy.

Ultimately, I believe in the survival of humanity, probably in some form of shared symbiosis with our machines. With things like quantum supercomputing, nano machines, and bio-genetic manipulation, I suspect that people 1000 years from now may be very different beings than we are today.

rosborne979
Sun 18 Feb, 2007 10:15 pm
Other people are thinking about information singularities as well: The Singularity Institute

Dedshaw
Sat 24 Feb, 2007 05:14 pm
Chances are pretty good that robots will never become intelligent or aware of their surroundings all by themselves, or by accident like they do in movies (e.g. Terminator, great movie btw :P). Computers can pretty much only do what they are taught. There was this commercial showing people who made these dog robots and designed them to play soccer all by themselves, no human involvement whatsoever. It was pretty neat to watch.

So if you can program a robot to play soccer, you could probably program a robot to free-think and do things on its own, though it will very likely be no walk in the park. Robots like the dogs, given objectives and told how to respond to certain situations, would be a lot easier: like a warbot taught when to take cover, when to shoot the enemy, and how to differentiate between enemy and ally; or like the dogs, programmed when to try to take the ball, make a goal, or block a shot. I guess it all depends on what your definition of artificial intelligence is. A robot that acts like a person, just walks around the house and does what it wants? We probably have a while to go. Robots built for certain tasks? We're pretty much already there.
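A hypothetical sketch of that kind of task-specific programming (the situations and responses below are invented): the robot only reacts with rules a human wrote for it; nothing is learned or self-directed.

Code:
def soccer_dog_policy(situation):
    # fixed, hand-written responses; the robot never invents a new behavior
    rules = {
        "opponent_has_ball": "tackle",
        "open_shot": "shoot",
        "ball_heading_at_goal": "block",
    }
    return rules.get(situation, "move_toward_ball")   # default behavior

print(soccer_dog_policy("open_shot"))   # shoot
print(soccer_dog_policy("lost"))        # move_toward_ball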

stuh505
Sun 25 Feb, 2007 07:08 pm
Quote:
Yes. I think we are less than 100 years away from dynamically self-coding systems based on genetic algorithms that will result in systems indistinguishable from human thought. I think that Genetic Programming will be the primary method for generating truly intelligent artificial systems.


Genetic programming would certainly be a useful asset in writing long-term robust software that needed to be able to adapt to varying situations, but there is no connection between genetic programming and thought/consciousness.

Genetic algorithms (a subset, I suppose, of genetic programming) are just a minimization technique. They serve the same computational purpose as gradient descent, simulated annealing, mean-field annealing, Monte Carlo simulations, and exhaustive/brute-force search: find a set of vector values that minimizes some heuristic function.
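In that spirit, a bare-bones genetic algorithm is nothing more than a search for a vector that minimizes a function. A minimal Python sketch (the population size, mutation scale, and target function are arbitrary choices for illustration):

Code:
import random

def heuristic(v):                            # the function to be minimized
    return sum((x - 3.0) ** 2 for x in v)

def mutate(v):
    return [x + random.gauss(0, 0.1) for x in v]

# start with 50 random 5-dimensional vectors
pop = [[random.uniform(-10, 10) for _ in range(5)] for _ in range(50)]
for _ in range(200):
    pop.sort(key=heuristic)                  # lowest cost = fittest
    survivors = pop[:10]                     # selection
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

pop.sort(key=heuristic)
print(pop[0], heuristic(pop[0]))             # a vector close to (3, 3, 3, 3, 3)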

Your prediction of 100 years is just silly. The present field of AI has absolutely nothing to do with creating machines that can actually think. There has not been any progress made in that department, ever. With no theories of where to start looking for the answer, and no path to follow, there is no possible way to predict when (if ever) we will reach that goal!

rosborne979
Sun 25 Feb, 2007 10:43 pm
stuh505 wrote:
Genetic programming would certainly be a useful asset in writing long-term robust software that needed to be able to adapt to varying situations, but there is no connection between genetic programming and thought/consciousness.


Except that natural genetics and evolutionary processes lead to thought/consciousness. And this is exactly the system being replicated.

stuh505 wrote:
Your prediction of 100 years is just silly.


So your silly prediction is better than my silly prediction?

stuh505
Sun 25 Feb, 2007 11:08 pm
Quote:
Except that natural genetics and evolutionary processes lead to thought/consciousness. And this is exactly the system being replicated.


It's true that consciousness arose through evolution. However, I thought your point was that genetic programming was part of the solution. Now, if you're just suggesting that genetic programming could be used as a method for figuring out what the solution is, and that from that point it can be abandoned, well, that makes a little more sense, because we know that approach worked for evolution.

However, there are some MAJOR problems with that idea. First, it involves quantizing the problem as a set of variables and a fitness or heuristic function.

1) You would have to come up with a genetic code similar to DNA. This is no simple task; there is no established field in this area to date, and it would likely take many decades of research to come up with the principles necessary to ensure proper diversity. We can use some of our knowledge of DNA, but there are still too many secrets.

2) The fact is that even our best minimization techniques do not work on extremely high-dimensional problems with highly irregular state spaces! For this we would need DNA strings comparable in length to human DNA, and that would make it a 600,000-dimensional problem, completely outside the scope of what we can do currently.

3) It would take a time comparable to how long it took for those traits to evolve in humans. Sure, we can be more direct with a computer, but it'd still probably take hundreds of thousands of years to run that program.

Quote:
So your silly prediction is better than my silly prediction?


My "silly prediction"? I didn't make a prediction; I merely pointed out that it is not possible to predict when we will discover something when nobody has a clue where to start looking. It's like asking when the cake will be done baking when nobody knows what the word "cake" means!

talk72000
Sun 25 Feb, 2007 11:35 pm
The Matrix series is all about artificial intelligence, with humans serving as an energy source: their bodies are encased in pods, and their biochemical processes serve as batteries for the robotic universe to carry on.

rosborne979
Mon 26 Feb, 2007 06:47 am
talk72000 wrote:
The Matrix series is all about artificial intelligence, with humans serving as an energy source: their bodies are encased in pods, and their biochemical processes serve as batteries for the robotic universe to carry on.

I think there would be far better bio-energy sources than pod-bound humans. I found that particular aspect of the story pretty weak. Duracell probably paid a lot for them to find a plot point which allowed Fishburne to hold up a coppertop :)

I would have preferred it if the machines were using the humans' neural cortex to augment their computational capacity. The human brain is fairly unique in the animal kingdom, so that at least would have seemed like a stronger plot point. They could also have used it to create some type of symbiosis, which might have made an interesting twist to the story. Oh well, good flick anyway ;)

rosborne979
Mon 26 Feb, 2007 06:59 am
stuh505 wrote:
However, there are some MAJOR problems with that idea. First, it involves quantizing the problem as a set of variables and a fitness or heuristic function.


Congratulations, you think like a machine.

I prefer to believe that the challenges will be overcome. And I left out a LOT of detail in my posts. You need to read the links and use a little imagination to see the implications of where things are going.

Anyway, Brandon asked for opinions. Those are my opinions.

stuh505
Mon 26 Feb, 2007 09:37 am
It has nothing to do with the way I think; it has to do with how a given algorithm or programming style works. If it doesn't involve quantizing the problem into a set of variables and a fitness function, then it is NOT a genetic algorithm, and you shouldn't have used the term if that's not what you meant.

rosborne979
Mon 26 Feb, 2007 04:57 pm
stuh505 wrote:
It has nothing to do with the way I think; it has to do with how a given algorithm or programming style works. If it doesn't involve quantizing the problem into a set of variables and a fitness function, then it is NOT a genetic algorithm, and you shouldn't have used the term if that's not what you meant.


I don't know what your problem is, but you have a very unpleasant attitude about something which I thought was supposed to be an interesting and entertaining discussion.

I'm just providing resources that I believe are rudimentary precursors to the type of coding techniques which will eventually lead to true artificial intelligence. And that was what Brandon asked for.

One of those resources happens to be the genetic-programming.org home page. I happen to find it an interesting resource, but if you don't, then don't read it. And if you don't think those guys know what genetic programming is, then take it up with them. Sheesh.

stuh505
Mon 26 Feb, 2007 05:05 pm
Sorry if I came off as having an attitude. I was just trying to state, matter-of-factly, the reasons why I disagree; there is no emotion behind it.

ebrown p
Mon 26 Feb, 2007 05:17 pm
Genetic Programming does not always boil down to "a set of variables and a fitness function". That describes the simple way to do it, which may be most appropriate given limitations in computing power.

But the genetic programming that interests me involves programs (which are basically lists of bytes) that can be mutated in any way (i.e. any byte's value can be changed, or any byte can be added or deleted at any point in the program).

This would mean that any program, from Doom III to a Linux kernel to some sort of intelligence, is theoretically possible (in the extreme case).

I don't think this is the same as "a set of variables".

A while ago I was working on a machine language with the interesting property that it had no errors-- i.e. any sequence of numbers was valid (although of course there were infinite loops and unreachable code). We wanted this language precisely so we could then do random mutations.

In this case, the interesting experiment is "fitness functions" meaning a way to kill off programs that don't meet certain requirements (i.e. solving a problem, or choosing the right item from a list).

But as pointed out, evolution in real life is based on "fitness functions", namely the ability to survive and reproduce.

Of course there are limitations to this, especially with current computing abilities. But for simple problems in limited "environments" there are interesting results.

But this is an area for the future-- letting your imagination go free for a bit is not only fun, it may lead somewhere.
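For what it's worth, here is a hypothetical miniature of the kind of "no errors" byte language described above: every byte decodes to some valid instruction, so any random mutation still yields a runnable program, and a fitness function culls programs that fail the target task (here, computing 2x + 1; all opcodes and constants are invented for the sketch).

Code:
import random

def run(program, x):
    acc = x
    for op in program:
        op %= 4                    # every byte maps onto *some* opcode
        if op == 0:
            acc += 1
        elif op == 1:
            acc -= 1
        elif op == 2:
            acc *= 2
        # op == 3 is a no-op; nothing is ever "invalid"
    return acc

def fitness(program):              # how close the program is to computing 2x + 1
    return -sum(abs(run(program, x) - (2 * x + 1)) for x in range(5))

pop = [[random.randrange(256) for _ in range(8)] for _ in range(60)]
for _ in range(300):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:15]                       # the rest are "killed off"
    children = []
    for _ in range(45):
        child = list(random.choice(parents))
        child[random.randrange(len(child))] = random.randrange(256)   # mutate one byte
        children.append(child)
    pop = parents + children

pop.sort(key=fitness, reverse=True)
print(pop[0], fitness(pop[0]))     # often evolves a program equivalent to 2x + 1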

stuh505
Mon 26 Feb, 2007 05:34 pm
ebrown, yes...I myself am very interested in such programs. I first had the idea to write that kind of program back in middle school. Even so, we're still talking about a set of variables and a fitness function.

Usually when we do genetic programming it's within some predetermined space, and we know what all the possible outcomes are.

In the ideal case, we would like to be able to perform evolution in a space where the foreseeable outcomes are not predictable. The main problem here, as I pointed out, is that instead of defining a template for our results to conform to, we need to start with a construction mechanism that is capable of producing anything.

This requires going down to a very basic level and having fundamental forces and so on, like representing virtual genes or atoms and having the organism be an extremely complex thing actually relying on self-regulating networks and the like. Now the problem becomes intractable.

The bigger problem is that we cannot just make up a set of virtual bases/genes that are simpler than in real life... if we want to evolve consciousness (and I think that is the point here), something that we have NO idea how it works, we cannot possibly come up with a simplified framework that is CAPABLE of producing it... so we would have to use the real-world framework.

The problem with that is, we don't know the real laws of physics. If it were the case that awareness (and real intelligence) merely resulted from some combination of energy/matter using basic particle-based physics, then we COULD. Almost everything in the universe can be described in terms of its energy state with respect to time: everything having to do with electromagnetic radiation, physical motion, physical objects, nuclear fusion, you name it. But awareness has no physical interpretation. Yet we know that it does arise and is somehow connected to matter/energy, because it uses energy as a fuel source to be maintained. Therefore, we are missing the very scientific laws that would be required for consciousness to evolve... and hence we cannot hope to evolve a virtual consciousness.

ebrown p
Mon 26 Feb, 2007 05:49 pm
I disagree (respectfully of course)...

I am arguing that a language that is "Turing complete" (meaning that it can be used to write a program for any possible computational task) can be used as the basis for a genetic process-- where a program in this language is "mutated".

By the very nature of a Turing complete language, it is possible to arrive at any combination of bytes. This widens the "predetermined space" to the set of all possible programs. What you are calling "atoms" (or genes) are now bytes (or opcodes and arguments).

Here is an important point (and I bold it as such)-- A set of Mutations on a Turing complete language, over a very large number of epochs, can lead to behavior that is complex and unexpected. In fact the set of "end-points" of this type of genetic process is not the set of "conceivable" programs... it is the set of "possible" programs.

I am arguing that the limiting factor in setting up this sort of open-ended, genetic process; where any result is possible is not technology or understanding-- it is computing power.

Now, of course, you throw the concept of consciousness into the mix. Whether consciousness can be reproduced on a Turing machine is an open question.

If I were to try, and had limitless time, and computing power... I don't think I would start with the real laws of physics. The virtual world inside a Turing machine has different rules than the physical world.

Instead of "genes" which are part of proteins and molecules that make up the chemistry of our world... the natural "gene" in the world of Turing Machines is the byte. I think I would start there.

spendius
Mon 26 Feb, 2007 06:10 pm
ebrown_p wrote:

Quote:
By the very nature of a Turing complete language, it is possible to arrive at any combination of bytes.


That is assuming it can go into a pub, order a pint, take a sip, tell the landlord it is gnat's piss, argue about it, get another one and turn to the wobbly and tell her you can get her into movies.

Otherwise it becomes extinct as soon as the rust sets in, and that doesn't take long on an evolutionary time scale.