Is Artificial Intelligence Even Possible?

 
 
Olivier5
 
  2  
Thu 20 Aug, 2015 10:22 am
@maxdancona,
Thanks, fair enough.

Just to explain where I come from, my gut-feeling is that there's been a lot of hype in this area and that not all the progress that is advertised is actual. But maybe that's just my usual cynicism.
0 Replies
 
Setanta
 
  1  
Thu 20 Aug, 2015 12:50 pm
There are many people who believe, based on existing AIs, that artificial intelligence is the only reasonable way for human beings to leave the earth to live elsewhere. For example, Martian orbital insertion requires such precision that some people believe a human pilot couldn't handle it, and that only a sophisticated AI could accomplish it. (Keep in mind, that's not just dropping small probes that land by parachute with attitude-control rocket assist--and that requires AI, too, given that Mars is anywhere from three to 24 minutes away by radio--you can't do it by tele-operation.) If humans are to spread out in the solar system, never mind visit other stars, we will need highly sophisticated and reliable AIs.
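
For what it's worth, that radio delay is easy to sanity-check with back-of-the-envelope arithmetic (a rough sketch; the Earth-Mars distances below are approximate figures, not anything precise):
Code:
# Rough sanity check of the one-way radio delay to Mars.
# Distances are approximate: ~55 million km at closest approach,
# ~400 million km near conjunction.
C_KM_PER_S = 299_792  # speed of light, km/s

for label, distance_km in [("closest approach", 55e6), ("near conjunction", 400e6)]:
    delay_min = distance_km / C_KM_PER_S / 60
    print(f"{label}: one-way delay is about {delay_min:.1f} minutes")

That gives roughly three minutes at best and over twenty at worst--far too long for any joystick-style tele-operation.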
Leadfoot
 
  1  
Thu 20 Aug, 2015 01:49 pm
@Setanta,
You set the bar for AI pretty low.

Intelligence is unrelated to precision or speed. 'Dumb' machines can be both precise and fast.
maxdancona
 
  1  
Thu 20 Aug, 2015 02:05 pm
@Leadfoot,
AI is a real field. There are people who work on developing AI. The term is well defined and understood.

Facial recognition and speech recognition are both examples of AI. So are the autonomous vehicles that Setanta is talking about. These machines, which can respond appropriately to unexpected danger and can change their instructions to meet new circumstances, are pretty impressive. They are making decisions and finding solutions to unforeseen problems on their own.
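
To give a sense of how ordinary this kind of AI already is, here is a minimal face-detection sketch using OpenCV's bundled pre-trained Haar cascade (the "photo.jpg" path is just a placeholder for illustration):
Code:
# Minimal face detection with OpenCV's bundled pre-trained Haar cascade.
# "photo.jpg" is a placeholder path used for illustration only.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns one (x, y, width, height) box per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")

A few lines of off-the-shelf code, and the machine is doing something that used to require a human pair of eyes.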

The best chess player in the world has been beaten by a computer... something that 30 years ago most people thought was impossible. The bar keeps getting higher.
rosborne979
 
  1  
Thu 20 Aug, 2015 07:20 pm
@Leadfoot,
Leadfoot wrote:
Do you think AI is possible and are you afraid of it?

Yes, I think true AI is possible, and possibly even inevitable. And I'm not afraid of it, but I would definitely be wary of it and be as cautious as possible when developing it.

That being said, I think we're still at least 50 years away from it. And when it does spring up, I doubt it will come from a single instance at a single point in time. I think many systems, private and professional, will begin to be coded with genetic programming techniques (which already exist), and various levels of AI will spawn with increasingly complex specialized functions. Inevitably many of them will be used as the code base for viral spam (probably intended to sell penis enlargement pills) and will begin to replicate into the environment on their own. An AI singularity would probably result very quickly after that.
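
For anyone curious what genetic programming techniques look like in miniature, here is a toy genetic algorithm that evolves random bit-strings toward a target (the target, population size and mutation rate are arbitrary choices made for illustration):
Code:
# Toy genetic algorithm: evolve random bit-strings toward an all-ones target.
# Population size, mutation rate and target length are arbitrary illustrations.
import random

TARGET_LEN = 20
POP_SIZE = 50
MUTATION_RATE = 0.02

def fitness(individual):
    # Fitness is simply the number of 1-bits (the "target" is all ones).
    return sum(individual)

def crossover(a, b):
    point = random.randrange(1, TARGET_LEN)
    return a[:point] + b[point:]

def mutate(individual):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in individual]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(100):                      # up to 100 generations
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LEN:
        break
    parents = population[:POP_SIZE // 2]  # keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(f"Best fitness: {max(map(fitness, population))}/{TARGET_LEN}")

Nothing in there is told how to reach the target; the population simply drifts toward whatever the fitness function rewards.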
neologist
 
  1  
Thu 20 Aug, 2015 08:16 pm
On another note:
Much of what we commonly refer to as "intelligence" . . . is artificial.
0 Replies
 
maxdancona
 
  1  
Thu 20 Aug, 2015 09:11 pm
@rosborne979,
Can you please define what you mean by "true AI" in a way that is testable?

I mean... if I give you a black box that I say contains a true AI (with any input/output available), what does this box need to do to convince you that I am telling the truth?
Olivier5
 
  1  
Fri 21 Aug, 2015 03:12 am
@maxdancona,
The Turing test is a good idea. Using voice, not typed text.
0 Replies
 
Setanta
 
  2  
Fri 21 Aug, 2015 03:36 am
@Leadfoot,
I don't set the bar for artificial intelligence at all--it appears that you are attempting to do so, though. Machines are not dumb; they are limited by how well, or ill, they have been programmed. (By the way, the word "dumb" means incapable of speech--even though i know what you meant, you just make yourself look bad when what you write lacks precision.)

Furthermore, i did not say that intelligence is related to precision or speed, so that's a straw man fallacy from you. There are tasks, such as Martian orbital insertion, which require precision and speed, and which a machine would perform in a manner superior to the performance one could expect from a human. That's what i referred to, and that's all i referred to.

You're trashing your own discussion.
Leadfoot
 
  1  
Fri 21 Aug, 2015 09:14 am
@maxdancona,
Quote:
The best chess player in the world has been beaten by a computer... something that 30 years ago most people thought was impossible. The bar keeps getting higher.


That's a long way from AI. Chess-playing programs are based on brute-force computation of the vast number of possible moves, plus the basic rules that were given to the machine. No intelligence required, just lots of memory and computational speed.

You don't have to take my word for it. The 'experts' like Hawking and Gates who are warning about the dangers of AI readily admit that it isn't here yet.
Leadfoot
 
  1  
Fri 21 Aug, 2015 09:26 am
@Setanta,
Quote:
Furthermore, i did not say that intelligence is related to precision or speed, so that's a straw man fallacy from you. There are tasks, such as Martian orbital insertion, which require precision and speed, and which a machine would perform in a manner superior to the performance one could expect from a human. That's what i referred to, and that's all i referred to.

It was you who raised the issue of speed and precision, so it was your straw man. If that was not your point in your previous post, what was your evidence of currently existing AI?

The computers used in early space probes that accurately accomplished orbital insertion had only a fraction of the computing power of this iPad I'm typing on. Celestial mechanics is based on very simple formulas that could be done on a calculator if you had enough time.
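
To put a number on "very simple formulas": the speed of a circular orbit falls straight out of the vis-viva relation, v = sqrt(mu / r). A rough sketch (Mars's gravitational parameter and radius are standard published values; the 400 km altitude is just an example I picked):
Code:
# Circular-orbit speed around Mars from the vis-viva relation: v = sqrt(mu / r).
# mu (GM) and the planetary radius are standard values; the 400 km altitude
# is an arbitrary example.
import math

MU_MARS = 4.2828e13      # m^3/s^2, gravitational parameter of Mars
R_MARS = 3_389_500.0     # m, mean radius of Mars
ALTITUDE = 400_000.0     # m, example orbital altitude

r = R_MARS + ALTITUDE
v = math.sqrt(MU_MARS / r)
print(f"Circular orbital speed at {ALTITUDE / 1000:.0f} km: {v / 1000:.2f} km/s")

One square root--exactly the sort of thing a 1970s flight computer, or a patient person with a calculator, could manage.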
maxdancona
 
  1  
Fri 21 Aug, 2015 09:27 am
@Leadfoot,
There are a couple of things wrong with that.

1) You are wrong about the definition of AI. Chess-playing computers, which do an intellectual task that was once only done by humans, are AI. AI exists as a field that includes speech recognition, intelligent search and, soon, self-driving cars.

2) You are wrong about chess-playing programs. There are many techniques used in chess-playing programs that aren't brute force. These include player modeling, Markov chains, and tree pruning (see the sketch at the end of this post).

3) There is a valid point that modern AI, for the most part (there are a few exceptions, including neural nets), is using the human intelligence of the programmers to solve problems. Most software isn't coming up with its own way to solve problems (or choosing its own problems to solve). This doesn't mean that it isn't AI, though. There are also examples of genetic algorithms and other techniques where software learns without human intervention and finds solutions to problems that its human programmers didn't envision.

4) If you don't like the real definition of AI (and it seems you don't), then you need to come up with a better definition of what it means.

The Turing Test is an important idea in the field of AI... if that is your definition, then OK. There are problems with the Turing test, though, and I suspect that the Turing test will be passed long before we get software that has what I would consider "consciousness". Of course I don't have a good way to define consciousness (any more than you do)... so the Turing test is the best threshold we have.

But the term AI, used by the people who actually know the field and are developing AI, encompasses current technology such as facial recognition, speech recognition and self-driving cars.

These advancements are pretty impressive in my opinion.
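
To make point 2 concrete, here is what tree pruning looks like in miniature: minimax search with alpha-beta pruning over a toy game tree (the tree and its leaf scores are invented for illustration; real chess engines add evaluation functions, move ordering, opening books and much more):
Code:
# Minimax with alpha-beta tree pruning over a tiny hand-made game tree.
# Leaf values are invented; real engines score positions with an
# evaluation function instead of fixed numbers.
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):        # leaf: a static evaluation score
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                 # prune: opponent avoids this branch
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# A small two-ply tree: the maximizer can guarantee a score of 3.
tree = [[3, 5, 6], [9, 1, 2], [0, -1]]
print(alphabeta(tree, maximizing=True))       # -> 3

Entire subtrees are skipped the moment they can no longer affect the result--that is the opposite of pure brute force.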

Leadfoot
 
  1  
Fri 21 Aug, 2015 09:35 am
@maxdancona,
We are talking about two different AIs. Video games, cameras, and some robot projects (especially their balance algorithms) have the kind of AI you are talking about, and yes, they can be pretty impressive.

But that is not the AI we are being warned about by the experts I mentioned previously.
maxdancona
 
  1  
Fri 21 Aug, 2015 09:42 am
@Leadfoot,
Here is where it gets tricky, Leadfoot: the line between my AI and your AI is not well defined.

I don't think these are two different AIs, I think they are the same thing. AI exists now, and it keeps getting better and better at doing things that used to be only done by humans. Cutting edge AI has been able to learn on its own through experimentation, and it has been able to come up with solutions to problems that were not considered by its programmers (this is what used to be called "creativity").

And there is no clear line between "my AI" and "your AI". There is the Turing test, which I accept (although it has some problems). The Turing test will be passed in the next decade or so using many of the same techniques we are using now to play chess or recognize faces.

Other than the Turing Test (which is just a matter of degree) you haven't been able to articulate a testable distinction between "my AI" and "your AI".

I think my AI and your AI are the same thing.
djjd62
 
  2  
Fri 21 Aug, 2015 09:47 am
Is Artificial Intelligence Even Possible?

let's hope so, it seems that natural intelligence is an endangered species
0 Replies
 
Leadfoot
 
  1  
Fri 21 Aug, 2015 10:05 am
@maxdancona,
I'll accept passing the Turing Test as the distinction between our definitions. I'd even accept text exchanges between me and the machine.

But OTOH you have a point. I have met people who would be convinced that the machine was a person if all it could do was talk about football. Maybe the 'degree' you are talking about is just the percentage of people who the machine could convince. So by that standard, your AI is here today.
maxdancona
 
  1  
Fri 21 Aug, 2015 10:21 am
@Leadfoot,
The point I am trying to make is that it is a spectrum... and current AIs keep getting better and better at doing human tasks.

I think the interesting challenge is not intellectual ability, but the idea of "consciousness". The problem is that no one really knows how to define consciousness in a way that can be measured. Computers today can exhibit human emotions and creativity... somehow that doesn't seem the same as our experience.

Part of the problem is that we hold onto the idea that humans have a "soul" (even though there is no scientific evidence for it). The idea that the brain is just circuitry and that it can be replicated on a circuit board seems impossible... but I am not sure that means it is incorrect.

Would you accept that a computer that can pass the Turing test has a soul? Or is the soul just a myth anyway?
Olivier5
 
  1  
Fri 21 Aug, 2015 10:25 am
@maxdancona,
Being self-conscious is necessary to pass the Turing test.
maxdancona
 
  1  
Fri 21 Aug, 2015 10:43 am
@Olivier5,
Quote:
Being self-conscious is necessary to pass the Turing test.


No... being able to simulate self-consciousness is sufficient to pass the Turing test.

If a piece of software can accurately predict what a self-conscious human would say in response, using a mathematical model based on semantic data, it would be able to pass the Turing test.
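
As a crude illustration of that "mathematical model based on semantic data", here is a word-level Markov chain built from a tiny made-up corpus (real systems use far richer models, but the principle--predict the next word from statistics--is the same):
Code:
# Crude illustration: generate "human-sounding" text from word statistics
# using a tiny word-level Markov chain. The corpus is made up; real systems
# use far richer semantic models.
import random
from collections import defaultdict

corpus = [
    "i think machines can be intelligent",
    "i think people are afraid of machines",
    "machines can be very fast and precise",
]

# Record which words follow which in the corpus.
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)

def generate(start="i", max_words=10):
    word, output = start, [start]
    for _ in range(max_words - 1):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate())   # e.g. "i think people are afraid of machines"

The model has no idea what any of the words mean; it only knows which word tends to follow which.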

Is this the same as being self-conscious itself?
Olivier5
 
  1  
Fri 21 Aug, 2015 10:47 am
@maxdancona,
You can't predict what a human will say based on semantics alone. You would need to know something about the real world and what's in it, including humans, computers, etc. You can't codify it all into mathematics, unlike chess.
 
