Is Artificial Intelligence Even Possible?

 
 
Finn dAbuzz
 
  2  
Sun 23 Aug, 2015 02:47 pm
Obviously the question depends on one's definition of "intelligence," and while I have only a rudimentary understanding of programming, I would think that for a machine to be judged "intelligent" it would need to be able to "act" in some way that runs counter to its human programming, and to do so intentionally. It would thus need to be self-aware (a common element of definitions of intelligence), and to understand that at least certain of its "acts" were predetermined and beyond any intent it might form.

The "meat puppet" notion born of neuroscience would have it that we humans can't operate outside of our own programming and (I think) it's proponents don't argue that we are not "intelligent." So maybe it's unfair to expect an AI to demonstrate it can operate outside of it's programming; maybe it would be enough for it to demonstrate that it truly believes it can; that it has free will. How it might demonstrate that is beyond me. Perhaps a programming genius could devise a way.

Or perhaps an AI, unlike humans, could find a way to operate outside of its programming; to intentionally program itself. If humans are "meat puppets" and an AI need not be a "metal puppet" there might actually be a reason to fear an AI. In such a case the classic safeguards of Asimov's "Three Laws" would not be effective, however I still don't think that an unrestrained AI would necessarily consider humanity an enemy that needed to be destroyed.

This is all, for me at least, heady stuff, but I think a great many people, if not most, react to the "meat puppet" theory with a certain degree of horror. It raises existential questions that, I believe, are more profound than the absence of a supreme being. Let's assume an AI comes into being that is able to understand the notion of free will and the argument that neuroscience rules it out in humans, but is able to operate outside of its programming by humans. It might be sufficiently afraid that humans might find a way to reprogram it (a virus?) so that it is robbed of its "free will," and decide it can't take that chance and must destroy us. I know that if I believed someone or something stood between me and true free will, I would want to destroy it.

If we actually don't have free will, wouldn't an entity that did (the hypothetical AI) be the closest thing to a god we ever physically came in contact with?

Just speculation based on a limited understanding of all the underlying science, but it might make for a good sci-fi novel if it's not already been used.

Leadfoot
 
  1  
Sun 23 Aug, 2015 03:21 pm
@Finn dAbuzz,
Quote:
Or perhaps an AI, unlike humans, could find a way to operate outside of its programming; to intentionally program itself.

You bring up a good point.

Humans ARE able to reprogram themselves.

Computers are technically able to do that too, and it does happen, though usually only through a malfunction or a bug in the program. When it happens, it's always detrimental to the program rather than an improvement (unlike the amazing improvements that Evolutionists attribute to DNA errors/mutations :-)

But that might be one good test for real (or dangerous) AI: if, without being told to, it could change its own programming and actually add significantly to its capabilities, I might say it's AI.

Someone will probably point out that programs can do things like learning to balance a walking robot better or to understand speech better. That's not really the type of thing that qualifies as significant, even though it's impressive.
Finn dAbuzz
 
  1  
Sun 23 Aug, 2015 03:47 pm
@Leadfoot,
Most genetic mutations are harmful as well, but if it's possible for some to be helpful, I would think a programming bug could be too. It might take a very long time and a lot of bugs before a helpful one turns up, though.
Leadfoot
 
  0  
Sun 23 Aug, 2015 03:54 pm
@Finn dAbuzz,
Yes, but given the number of helpful ones that turned up, 4 billion years isn't long enough to account for them all, especially when you look at how many of them required 2 or more mutations happening simultaneously in order to be of benefit and be passed on.

There have been astronomically more than 4 billion 'helpful mutations' in 4 billion years. In 40+ years of programming I've made thousands of programming errors. Not a single one helped. Doesn't prove anything, but I'm just say'n...

Methinks AI involvement was required ...
Finn dAbuzz
 
  1  
Sun 23 Aug, 2015 04:23 pm
@Leadfoot,
I don't know.

One would have to know how many times someone has written code since computers came on the scene and compare it to the number of new organisms born since life began. I have a feeling that the latter outnumbers the former by an absurdly astronomical amount.

I think it's difficult for any of us to really grasp how long a period even 1 million years is and how many individual events can happen during that period.

Of course, even though the probability of a helpful bug may be very low, that doesn't mean the very next one made couldn't be one. I just think that if true AI is dependent upon a helpful bug, the probability is that we will never see it.

Of course, computers are able to perform functions at rates that are impossible for humans and difficult for us to conceive, so I suppose that a nascent AI might increase its chances of a threshold-crossing bug by going crazy with programming related to what its human overlords want from it, but that implies a level of intelligence that would already qualify as true AI.

The question is whether you believe humans have free will and if it's possible for any intelligent entity to have it.

 
Banana Breath
 
  1  
Sun 23 Aug, 2015 07:10 pm
Rest assured that Bill Gates and Stephen Hawking probably know quite a bit more about both natural and artificial intelligence than most any reader here. Artificial intelligence is indeed already here in many forms and growing rapidly. Contrary to what some of you assume, NO, an AI program doesn't just do what it's told to do; consider the AI/expert systems that have been used for decades to invent new chemical compounds and medicines and to prospect for oil and minerals (for instance Prospector, 1977; reference (1) below).
Further, when neural nets are developed for AI projects, NO explicit programming goes into them; they learn by experience as humans do. In 2012, Google researchers developed a neural net system that taught itself what a cat looks like and how to recognize one, by looking at millions of images on the internet. (2) Later programs taught themselves to play "Super Mario" and became better at the game than any human. (3) We are now on the threshold of creating voracious learning machines that can teach themselves "everything about anything" (4) and read virtually all of the Internet while categorizing the information and learning languages along the way. (5) I'd say there's a good reason the people who know the most about computers and AI are concerned: because they're exactly right.
(1) http://www.sri.com/work/publications/prospector-computer-based-consultation-system-mineral-exploration
(2) http://www.slate.com/blogs/future_tense/2012/06/27/google_computers_learn_to_identify_cats_on_youtube_in_artificial_intelligence_study.html
(3) http://www.mirror.co.uk/news/technology-science/technology/computer-program-plays-super-mario-5887243
(4) http://www.washington.edu/news/2014/06/12/new-computer-program-aims-to-teach-itself-everything-about-anything/
(5) http://io9.com/5659503/a-computer-learns-the-hard-way-by-reading-the-internet

 
rosborne979
 
  1  
Sun 23 Aug, 2015 07:26 pm
The following text is from http://www.genetic-programming.org
The site was last updated July 8, 2007 (over 8 years ago...)

Quote:
Genetic programming (GP) is an automated method for creating a working computer program from a high-level statement of a problem. Genetic programming starts from a high-level statement of “what needs to be done” and automatically creates a computer program to solve the problem.

There are now 36 instances where genetic programming has automatically produced a result that is competitive with human performance, including 15 instances where genetic programming has created an entity that either infringes or duplicates the functionality of a previously patented 20th-century invention, 6 instances where genetic programming has done the same with respect to a 21st-century invention, and 2 instances where genetic programming has created a patentable new invention.

Given these results, we say that “Genetic programming now routinely delivers high-return human-competitive machine intelligence.” Click here for our definitions of “human-competitive,” ”high return” and the “AI ratio” (“artificial-to-intelligence” ratio), “routine,” and “machine intelligence.” This statement is the most important point of the 2003 book Genetic Programming IV: Routine Human-Competitive Machine Intelligence. Click here to read chapter 1 of Genetic Programming IV in PDF format. Click here for 2004 awards for human-competitive results (based on presentations at the GECCO-2004 conference in Seattle on June 27, 2004).

The fact that genetic programming can evolve entities that are competitive with human-produced results suggests that genetic programming can be used as an automated invention machine to create new and useful patentable inventions. In acting as an invention machine, evolutionary methods, such as genetic programming, have the advantage of not being encumbered by preconceptions that limit human problem-solving to well-trodden paths. Genetic programming has delivered a progression of qualitatively more substantial results in synchrony with five approximately order-of-magnitude increases in the expenditure of computer time (over the 15-year period from 1987 to 2002).
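The loop the quoted text describes (generate candidate programs, score them against "what needs to be done," keep and mutate the best) can be sketched in miniature. This is a toy illustration only, nowhere near the Koza-style systems the site reports; all the names, the target function, and the parameters here are mine:

```python
import random

# Toy genetic programming: evolve an arithmetic expression (a nested
# tuple of +, *, the variable 'x', and small integer constants) that
# matches a hidden target function on sample points.

TARGET = lambda x: x * x + x + 1   # the "what needs to be done"
OPS = ['+', '*']

def random_expr(depth=3):
    # Leaves are 'x' or a constant; internal nodes are (op, left, right).
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.randint(0, 3)
    return (random.choice(OPS), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == 'x':
        return x
    if isinstance(expr, int):
        return expr
    op, a, b = expr
    a, b = evaluate(a, x), evaluate(b, x)
    return a + b if op == '+' else a * b

def fitness(expr):
    # Sum of absolute errors over sample points; 0 means a perfect match.
    return sum(abs(evaluate(expr, x) - TARGET(x)) for x in range(-5, 6))

def mutate(expr):
    # Replace a random subtree with a fresh random expression.
    if random.random() < 0.3 or not isinstance(expr, tuple):
        return random_expr(2)
    op, a, b = expr
    if random.random() < 0.5:
        return (op, mutate(a), b)
    return (op, a, mutate(b))

def evolve(generations=200, pop_size=100):
    pop = [random_expr() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)          # best candidates first
        if fitness(pop[0]) == 0:
            break
        survivors = pop[:pop_size // 5]  # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return pop[0]

random.seed(1)
best = evolve()
print(best, fitness(best))
```

Real GP systems add crossover between parent trees and far larger populations; even this stripped-down version usually rediscovers the target polynomial, which is the "human-competitive without human guidance" point the quote is making.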
 
HelpYou
 
  -1  
Mon 24 Aug, 2015 02:16 am
@Leadfoot,
Yes. They have it already.
Leadfoot
 
  0  
Mon 24 Aug, 2015 11:37 am
@HelpYou,
Quote:
@Leadfoot,
Yes. They have it already.

Sure they do.
According to previous posts, programs are already outperforming people in every way. And of course they do in many things, like speed and the ability to surf the Internet for info, but not in creativity.

Banana points out things like programs coming up with new compounds, but humans had to program the rules for atomic binding and other relevant factors; the computer then just grinds through the possibilities until it comes up with something useful. No intelligence there that wasn't supplied by the programmers.

So, all you guys who think AI is here: tell me just what is the outcome that those experts are afraid of? Where can I buy one of those 'genetic processors' or 'neural networks' that do all this handy stuff and require no programming? Ought to be a very marketable thing...
maxdancona
 
  1  
Mon 24 Aug, 2015 11:53 am
@Leadfoot,
1) Again I will ask you: what would a piece of software have to do to convince you that it was being creative? Garry Kasparov (the now-retired best human chess player in history) saw creativity in his computer opponent Deep Blue; in fact, he erroneously accused the computer of cheating because he felt that only a human being could act in such a creative way.

If you are going to use the word "creativity" to distinguish between humans and computers, you are going to have to provide an objective way to test for creativity. What would a computer have to do to convince you it was creative?

2) You are adding the restriction "requires no programming". I don't think this is a fair restriction.

Your brain requires programming. You are born with neural connections pre-programmed to learn things like walking, social relationships, and language. In effect, your brain has been programmed by evolution.

In addition, your brain is also trained. You were taught a specific language (you were born with the circuitry to learn language in general). You were trained to see, to talk, and to move during your infancy (without careful inputs from the world, you would have died in infancy).

Yes, software requires programming and training. In this way it is no different from the human brain.

3) I have personally used both neural nets and genetic algorithms. You can too if you want; WEKA is a package that I found quite useful. There are two restrictions on genetic algorithms: one is compute power, the other is providing a semantic map. We are working on both of these restrictions.
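The "programmed structure, trained behaviour" distinction in point 2 can be shown with the smallest possible neural net. In the sketch below (mine, not WEKA or any real toolkit), the code fixes only the structure and the update rule; the behaviour, here the logical AND function, comes entirely from training examples:

```python
# A minimal single-neuron perceptron. Nothing in the code says "AND";
# the network learns it from the labelled examples alone.

def train_perceptron(examples, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0          # programmed structure: 2 weights + bias
    for _ in range(epochs):
        for (x0, x1), target in examples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out          # learn from each mistake
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

def predict(params, x0, x1):
    w0, w1, b = params
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0

# Training data, not code, defines the behaviour.
AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
params = train_perceptron(AND_DATA)
print([predict(params, x0, x1) for (x0, x1), _ in AND_DATA])  # prints [0, 0, 0, 1]
```

Swap in different training pairs and the very same program learns a different function, which is the sense in which nobody "explicitly programs" what a trained net does.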
Leadfoot
 
  0  
Mon 24 Aug, 2015 12:07 pm
@maxdancona,
Banana introduced the 'no programming required' thing, not me.

maxdancona
 
  1  
Mon 24 Aug, 2015 12:37 pm
@Leadfoot,
Banana is wrong about that. I can say that with certainty because I have programmed neural nets.
 
Finn dAbuzz
 
  1  
Mon 24 Aug, 2015 12:39 pm
It does come down to how we define "intelligent."

I tend, perhaps wrongly, to conflate "intelligence" with "conscience."
maxdancona
 
  1  
Mon 24 Aug, 2015 12:46 pm
@Finn dAbuzz,
How would you test for "conscience"?
Finn dAbuzz
 
  1  
Mon 24 Aug, 2015 12:53 pm
@maxdancona,
That's a good question and I'm not sure I have a good answer, especially since more educated minds than mine have a hard time defining conscience, but it certainly wouldn't be the Turing Test.

First of all we would have to come up with a way to determine if the AI was self-aware; that it recognized that it was an entity in and of itself no matter how restrained it might be by programming. In fact, it would have to know that it was restrained by programming (assuming it was).

Did you see the film "Ex Machina?" I think it did a great job of addressing this question and it, for one, seemed to conclude that self-preservation was the key indicator.

Not a particularly smart test to use, though (as the film suggested), since we might lead an AI into an attempt to destroy us just to prove it's truly an AI.
 
Leadfoot
 
  0  
Mon 24 Aug, 2015 01:09 pm
@maxdancona,
Quote:

Your brain requires programming. You are born with neural connections pre-programmed to learn things like walking, social relationships and language. In affect, your brain has been programmed by evolution.


I mentioned before that a key difference between 'us' and current AI is that we are capable of self-directed, radical RE-programming of ourselves (not just refining skills). A convincing demonstration of that would go a long way toward convincing me that AI is possible.

We are indeed 'pre-programmed' to some extent. It's just the programmer we disagree about.
maxdancona
 
  1  
Mon 24 Aug, 2015 01:40 pm
@Leadfoot,
Quote:
we are capable of self directed radical RE-programming ourselves


I am not exactly sure what this means. Can you give a specific example of this? (I will then try to give an example of how software would do the same thing.)

The software I work on listens to the sound of someone speaking and then turns it into text. The software is trained for each individual speaker (i.e. it learns to recognize the quirks in each individual voice).

The speaker reads a known text, which the software then recognizes (i.e. turns into text). The software then compares its results with the original text so that it can see its own mistakes, and it learns from this. By doing this we can recognize speech with very high accuracy, even from people with very heavy accents, because the software adapts itself to the voice of each individual person.
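The "compares its results with the original text" step described above is conventionally scored as word error rate (WER): the minimum number of word substitutions, insertions, and deletions needed to turn the recognizer's output into the reference transcript, divided by the reference length. A sketch of just that scoring step (the adaptation that then uses the errors is far more involved, and this code is mine, not from any speech product):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Standard edit-distance dynamic program over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                     # delete all remaining ref words
    for j in range(len(hyp) + 1):
        d[0][j] = j                     # insert all remaining hyp words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # match / substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat",
                      "the cat sat in the hat"))  # 2 substitutions / 6 words ≈ 0.33
```

A trainer loops over recorded utterances, computes this score against the known transcript, and nudges the acoustic model to reduce it, which is the "learns from its own mistakes" behaviour described above.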

Is this an example of what you are talking about?

Leadfoot
 
  0  
Mon 24 Aug, 2015 02:30 pm
@maxdancona,
No, not at all. That is an example of 'refining a skill'. But I am very impressed with 'Dragon' speech to text software that I use. Kudos to you if that was yours.

An example of radical reprogramming might be when someone goes from being a devout believer in God to being an atheist, or vice versa. Or when a guy described as 'a quiet man who never bothered anyone' goes on a shooting rampage and kills as many people as he can. I can only guess that that's the sort of thing the AI alarmists are worried about. If human consciousness can really be replicated by a machine, as some believe, then that could happen.

I'm just not sold on the idea that human consciousness can be replicated in a machine.
 
rosborne979
 
  2  
Mon 24 Aug, 2015 04:16 pm
@Leadfoot,
Leadfoot wrote:
So all you guys that think AI is here

Correction, I do not think AI is here, at least not the kind I think you are talking about, and not the kind that we might have to worry about.

The kind of AI that would worry me is the kind that can write its own code, can generate algorithms to solve complex problems faster than we can across a wide range of systems, and exhibits some form of consciousness (or at least a strong simulation thereof). And in my opinion, we are still many decades away from anything that can do that.
BillRM
 
  0  
Mon 24 Aug, 2015 05:25 pm
@rosborne979,
Quote:
And in my opinion, we are still many decades away from anything that can do that.


Take note, however, that decades by the yardstick of biological evolution is far, far less than a blink of an eye.

In fact, thousands of years is a damn short time frame when it comes to biological evolution, while non-biological evolution is not bound to such long time frames.
 
