Is Artificial Intelligence Even Possible?

 
 
Leadfoot
 
  0  
Fri 21 Aug, 2015 11:04 am
@maxdancona,
Quote:
@Leadfoot,
The point I am trying to make is that it is a spectrum... and current AIs keep getting better and better at doing human tasks.


That's all true. But the thing we are being warned about is the 'singularity' you mentioned: the point at which the machine becomes intelligent enough to improve itself independently of human help and then starts deciding on its own tasks. Something like an AI being tasked to calculate the path of climate change and then deciding that the only logical thing to do is to extinguish human life.

I'm more afraid of a human making that decision. The machines don't scare me at all.
maxdancona
 
  1  
Fri 21 Aug, 2015 12:11 pm
@Olivier5,
To pass the Turing Test, all the computer has to do is simulate what a human will say. The goal of the Turing Test is to fool you (as part of the panel of experts); it doesn't require human thinking or human knowledge. All it requires is the ability to make responses that fool you into thinking they require human thinking or knowledge.

I am going to go out on a limb here and assume that I am passing the Turing Test (i.e. that you believe based on my responses that I am a human). You probably didn't predict my responses... not perfectly anyway. All you can do is consider this response you are reading and compare it to your model of what things a human (in this case named "Max") would likely say.

There is no reason that formulating a simulated human response that is good enough to make it impossible for you to tell it wasn't made by a human can't be done by an algorithm backed by a very good semantic map.

You are saying that it can't be done... but you aren't giving me any reason that it can't be done. I don't see why you would believe that this isn't possible.

maxdancona
 
  1  
Fri 21 Aug, 2015 12:15 pm
@Leadfoot,
I think the word "singularity" is a silly word (and I am not alone in thinking this). I believe the term was invented by Kurzweil. I don't believe it is a serious idea in AI research.

The idea that there is some "line" that will be crossed doesn't make any sense. To have such a line, you need to have a black and white distinction between intelligent and non-intelligent... there is no such black and white distinction. There already is software that learns and adapts itself to deal with changing environments and tasks. It gets better and better... and the important distinction now is whether it is useful or not useful (not whether it is intelligent or not).

The Turing Test is a serious idea in AI. But, we all understand that this is an imperfect test.
Leadfoot
 
  0  
Fri 21 Aug, 2015 12:31 pm
@maxdancona,
Quote:
@Leadfoot,
I think the word "singularity" is a silly word (and I am not alone in thinking this). I believe the term was invented by Kurzweil. I don't believe it is a serious idea in AI research.

I agree with you on the 'singularity', but what would it take to convince you that it is taken as a serious idea? There is at least one foundation with many well known IT experts and 7 figure budgets dedicated to finding an answer to this singularity 'problem'. They absolutely believe it is a far more likely problem than nuclear war or climate change.
maxdancona
 
  1  
Fri 21 Aug, 2015 12:34 pm
@Leadfoot,
I work in speech recognition (which is AI). The people I work with scoff at the idea as I do (that doesn't mean that everyone does).

If some foundation offered me a cushy job with a 7 figure budget to study the "singularity", I very well may change my mind. I would still privately wish that the concept were better defined.
Leadfoot
 
  0  
Fri 21 Aug, 2015 12:41 pm
@maxdancona,
You got me there. But why would a guy like Bill Gates climb on board that clown car? He wouldn't cross the street to pick up a 7 figure check. There must be something else at work.
maxdancona
 
  1  
Fri 21 Aug, 2015 01:03 pm
@Leadfoot,
You are confusing two points here, Leadfoot.

What I have argued is that imagining some definite line from "AI" to "true AI" is ridiculous because there is no clearly defined line to cross. The line is an invention. Instead I am arguing that progress in AI is along a spectrum and will continue into the future as technology gains the ability to do more and more.

I have never argued here that AI isn't dangerous, or that safeguards shouldn't be put into effect.

In Kurzweil's dark vision, in the year 2080 after the last organic human has been disposed of, the robot elders won't be able to put a specific date on when they became intelligent. They will accept that it was a gradual process of development over decades. There will be no singularity for machines any more than humans can point to the day when our consciousness first awakened.

Human scholars look at the photosensitive spots on amoebas and think... this is the beginning of the human neurological system that makes us conscious. Robot archaeologists might look at the speech recognizers we have now in the same way.
Leadfoot
 
  0  
Fri 21 Aug, 2015 01:14 pm
@maxdancona,
Quote:

You are confusing two points here, Leadfoot.

And you are confusing only one. You seem to think that, even after I understood what you meant by AI, I still disagreed with you.
0 Replies
 
rosborne979
 
  1  
Fri 21 Aug, 2015 08:16 pm
@maxdancona,
maxdancona wrote:
Can you please define what you mean by "true AI" in a way that is testable?

That's a tough question, Max. In my experience, most programmers use the phrase "True AI" in casual conversation when they want to cut around all the philosophical hand-wringing about the definition of Intelligence and get right to the meat of the actual discussion or question being asked.

For me, I think of True AI as any type of machine which I cannot distinguish from a human being in an extended interaction.
Finn dAbuzz
 
  1  
Fri 21 Aug, 2015 09:19 pm
@Leadfoot,
Not having read all the prior responses:

Yes, it certainly is possible and it will eventually come to pass.

The only question for humans is whether or not we can program an AI not to destroy us.

I don't think that's really much of a problem.

Let's assume the AI can break through our programming. Why would it want to destroy us?

The silly Star Trek notion of computers that decide mankind is an obstacle to universal peace is just that.

An AI will be intelligent enough, by definition, to know how much it knows and how much it doesn't. It will, I think, look to the other intelligent entities for assistance.

One isn't going to be bent on destroying the other.

We will advance.
0 Replies
 
Olivier5
 
  2  
Sat 22 Aug, 2015 01:33 am
@maxdancona,
Quote:
There is no reason that formulating a simulated human response that is good enough to make it impossible for you to tell it wasn't made by a human can't be done by an algorithm backed by a very good semantic map.

I think a semantic map would be far from enough. You'd need:

- syntax;
- a capacity to emulate different registers of speech (slang, casual, polite, formal, scientific, etc.);
- a capacity to emulate the right set of emotions (which can be heard through human speech) in the right circumstances;
- some sense of humour;
- access to a vast corpus of texts (such as a Wikipedia on steroids), because any human has a rather vast knowledge of the world; so to pass the test the IA needs to answer questions like "when was the last time you saw an elephant climbing on a rainbow?" - to which one cannot respond properly without some documentation on elephants and rainbows;
- in fact the IA could not answer the elephant-on-rainbow question without some sense of what an elephant IS and what it can and cannot do. Hence some modeling of the real world is also necessary.


Olivier5
 
  1  
Sat 22 Aug, 2015 02:41 am
@Olivier5,
I mean "AI" of course.
0 Replies
 
nacredambition
 
  0  
Sat 22 Aug, 2015 03:22 am
@rosborne979,
Quote:
get right to the meat


Setanta
 
  2  
Sat 22 Aug, 2015 05:57 am
@Leadfoot,
No, i did not say that intelligence is related to precision and speed. It's your straw man, and your response shows that you lack basic debating skills, as well.

Orbital insertion when approaching the earth is far easier than orbital insertion when approaching Mars, which has a much smaller circumference and a much thinner atmosphere. The sophistication of the computers controlling such a function has absolutely nothing to do with whether or not they will control the various parts of the guidance and propulsion systems in a precise and rapid manner. You really don't get this at all. If a computer has been properly programmed, its relative "computing power" is meaningless--as long as it has a sufficiency, and has been properly programmed, it will perform to expectation.

GIGO, buddy--if necessary, look that up.
0 Replies
 
Setanta
 
  1  
Sat 22 Aug, 2015 06:32 am
This is what i originally wrote about Martian orbital insertion:

Quote:
For example, Martian orbital insertion requires such precision that some people believe that a human pilot couldn't handle it, and that only a sophisticated AI could accomplish it.


The ability to perform an orbital insertion depends upon the ability to control the guidance and propulsion systems--the attitudinal rockets and the main drive rockets. The point about an AI controlling it has little to do with "computing" power and everything to do with programming. So long as it can do the necessary computations and send the necessary commands to the guidance and propulsion systems, a Commodore 64 can handle that. The quality of the programming, the quality of systems control and the quality of remote sensing systems (Hey! Is that a collision course with Phobos?) matter far more than so-called "computing power."
Leadfoot
 
  0  
Sat 22 Aug, 2015 08:40 am
@Setanta,
Well, you have convinced me that by your definition of AI, I'm dealing with AI every time I run GT6 on my PS3. Which is fine. It's just not the thing the AI alarmists are talking about.

I'm still not afraid of either definition. One because it's just an extension of very useful technologies (programmable machines), and the other because I don't think they even know the nature of the thing they're trying to emulate.
Leadfoot
 
  0  
Sat 22 Aug, 2015 09:08 am
@nacredambition,
Quote:
Quote:
get right to the meat

That vid was thought provoking.

I wonder why atheists can envision meat life emerging from water but not god life from quarks?
Setanta
 
  1  
Sat 22 Aug, 2015 11:25 am
@Leadfoot,
I did not say that you or anyone else should be afraid of artificial intelligence. It was never any intention of mine to even take notice of silly paranoia. You seem to be just another theist making extravagant and sneering claims about atheists, as though they all think exactly alike. Yeah . . . right . . .
0 Replies
 
nacredambition
 
  0  
Sun 23 Aug, 2015 12:15 am
@Leadfoot,
Quote:
I wonder why atheists can envision meat life emerging from water but not god life from quarks?


Is it because "god life from quarks" is an artifice, not an AI?
Leadfoot
 
  1  
Sun 23 Aug, 2015 01:40 pm
@nacredambition,
Quote:
Quote:
"I wonder why atheists can envision meat life emerging from water but not god life from quarks?"

Is it because "god life from quarks" is an artifice, not an AI?


On what grounds do we know that?
Perhaps "meat life from water" is the actual artifice.
0 Replies
 
 
