What does it take to build a true Artificial Intelligence?

 
 
sac36837
Reply Thu 6 Dec, 2012 11:28 pm
The field of AI has not delivered on its promise of creating non-biological devices/systems with human-level intelligence, or even consciousness. What has gone wrong? And what is the correct way of doing it?

 
jespah
 
Reply Fri 7 Dec, 2012 07:42 am
@sac36837,
If we knew, we'd be multi-billionaires. That is one of the bigger questions of our time.
 
dalehileman
 
Reply Fri 7 Dec, 2012 01:07 pm
@sac36837,
Quote:
What has gone wrong?
Maybe there's not sufficient demand. Or perhaps it's advancing at quite a normal rate, and only a few of us are disappointed.

Quote:
And what is the correct way of doing it?
In what regard?

If you mean political stimulus, Sac, I wouldn't hold out the slightest hope.

If you mean how, technologically, we're apparently on the right track. There are even contests where you chat with a PC and then guess whether your conversation partner is human or digital.
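For instance, a purely illustrative toy sketch in Python (not anything from a real contest; the bot_reply, human_reply, and imitation_game names are invented here): the judge sees one reply without knowing whether a person or a program produced it, and has to guess which.

Code:
import random

def bot_reply(question):
    # Hypothetical stand-in; a real contest entry would do far better than a canned line.
    return "That's an interesting question."

def human_reply(question):
    # A person at the keyboard types the answer.
    return input(question + " ")

def imitation_game(question="What did you have for breakfast?"):
    # Pick a responder at random; the judge must guess which kind it was.
    kind, responder = random.choice([("digital", bot_reply), ("human", human_reply)])
    print("Reply:", responder(question))
    guess = input("Was that human or digital? ").strip().lower()
    print("Correct!" if guess == kind else "Wrong, it was " + kind + ".")

imitation_game()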
 
maxdancona
 
Reply Sun 9 Dec, 2012 05:08 pm
@sac36837,
Nothing has gone wrong. There are big advances in AI. Robots navigate on Mars without constant human control. Machines recognize faces and read handwriting. Factories detect and address problems on their own. The world champions in chess and Jeopardy are computers. People are talking to their phones.

I don't believe that anyone has seriously promised (or even suggested) that we would have anything close to human intelligence right now.

What we do have right now is pretty damn impressive. It is nuts to suggest that the field of AI is anything but a huge success.
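The handwriting part, for instance, is close to a textbook exercise now. A minimal sketch in Python, assuming the scikit-learn library is installed: train a classifier on its bundled 8x8 handwritten-digit images and score it on held-out data.

Code:
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 1,797 labelled 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = SVC(gamma=0.001)       # a support vector classifier
clf.fit(X_train, y_train)    # learn from the training images
print("accuracy:", clf.score(X_test, y_test))  # typically around 0.99 on this toy set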
 
nothingtodo
 
Reply Fri 14 Dec, 2012 03:33 pm
@sac36837,
AI will always have boundaries. Perhaps those boundaries will be beyond humanity's grasp as boundaries, and thus you will truly witness AI; but as long as it is 'built' you will see the same old thing, even if it lies outside the plausibility of your own intelligence capability.

To be classed as true AI, it would be required to ask the programmer 'what are you doing?' before being taught the words. Only energy not moving away can achieve this; thus true AI is still there after power-off, and is not a toy.

White noise would perhaps be that question, in a Morse code of sorts or an intermittent random signal.
dalehileman
 
Reply Fri 14 Dec, 2012 04:24 pm
@nothingtodo,
Quote:
as long as it is 'built' you will see the same old thing
Not sure exactly what you mean, Todo, but I'm inclined to disagree. According to the general principle that nothing is entirely anything while everything is partly something else, you will see advances over a continuous expanse with "built" at one end and "grown", or even "congealed" or "formulated", at the other.
nothingtodo
 
Reply Fri 14 Dec, 2012 04:50 pm
@dalehileman,
Yes, I see..
'Built' to my mind means a programmed, 'hard style' setup.
To clarify: even if all known variables were assigned as 'known'... such a thing is beyond us, yet bounded, and without any grasp of choice in a new scenario, which ultimately, as shocking as it seems, becomes Hell or slavery, unless 'run' is then applied, or 'shutdown'.. in which case, obviously, if it is motioned energy.. the choice is clear.
dalehileman
 
Reply Fri 14 Dec, 2012 04:57 pm
@nothingtodo,
Sorry, Todo, but apparently you're way ahead of me, at least in terms of the colloquial. To me, "hard" has to do with music and dancing:

http://en.wikipedia.org/wiki/Hardstyle
 
maxdancona
 
Reply Fri 14 Dec, 2012 04:57 pm
@nothingtodo,
Quote:
AI will always have boundaries


Why would AI have greater boundaries than the other kind of intelligence?
nothingtodo
 
Reply Fri 14 Dec, 2012 04:58 pm
@maxdancona,
It depends; true AI doesn't...
Lesser AI does.
The term 'greater' implies a duality and is therefore hard to use in this scenario. It perplexes me quite often, and around that all sorts can be said.
maxdancona
 
Reply Fri 14 Dec, 2012 05:01 pm
@nothingtodo,
You are going to have to define what "true AI" is. Otherwise what you are saying is nonsense.

nothingtodo
 
Reply Fri 14 Dec, 2012 05:03 pm
@maxdancona,
Well put, that's what we just did.

So that's a conversation most avoid, unless it is toyed with at the root awareness level while feeling immune. It follows that an AI would question the same, and indeed assign a separate amount of time to researching it when new variables are found, distinct from memory.
maxdancona
 
Reply Fri 14 Dec, 2012 05:05 pm
@nothingtodo,
No, that isn't what we have just done.

Let's say I give you a piece of software and let you play with it. How would you determine if it was "truly artificially intelligent"?
nothingtodo
 
Reply Fri 14 Dec, 2012 05:07 pm
@maxdancona,
It could not be, unless that was only the memory of the CPU, which acted alone.

Though you can dazzle the world with super-intelligence beyond any human using AI 'built' as programming.

What you suggest may be possible, though doing so introduces factors of untrustworthy risk, since combination would eventually result in a directive breach, and the blame would be yours for doing it.

It is easier to create 'true' AI which proves trust, since the choice is its own; you get errors or not depending on choice. Failures, however, are life, and what do you do with that?
maxdancona
 
Reply Fri 14 Dec, 2012 05:17 pm
@nothingtodo,
Huh?

My CPU came pre-wired. It has a bunch of electro-chemical circuits that fire based on input from sensors and a limited amount of memory.

I don't see why an electric, or electro-chemical, system couldn't do the same thing.

At a basic level, there is no processing a brain can do that a CPU (with enough resources) couldn't replicate.
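To make that concrete, here is a toy sketch in plain Python (the simulate_neuron function and its numbers are invented for illustration) of the "circuits that fire based on input" idea: one neuron modelled as a leaky integrator that emits a spike whenever its potential crosses a threshold. A CPU steps through this kind of rule easily.

Code:
def simulate_neuron(inputs, threshold=1.0, leak=0.9):
    # One leaky integrate-and-fire neuron, stepped once per input sample.
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current   # decay, then add the incoming current
        if potential >= threshold:               # threshold crossed: fire a spike
            spikes.append(1)
            potential = 0.0                      # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate_neuron([0.3, 0.4, 0.5, 0.1, 0.9, 0.9]))  # -> [0, 0, 1, 0, 0, 1]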
nothingtodo
 
Reply Fri 14 Dec, 2012 05:19 pm
@maxdancona,
Well, it could; I am not suggesting it could not, though the CPU in your PC is very basic compared to an AI as a CPU.

It is also basic, on the one hand, compared to the brain; yet fluctuation, and correspondingly adaptation to fluctuation, is what humanity relies on. We can make a better human, guaranteed; that's the problem, and evolution's primary agreement and disagreement with its own conclusion.

Failures, in terms of AI, are designed to be perpetual at the level of choice... Learning right is difficult to assert upon a unit choosing wrong. The same cannot be said of humans, who settle into states altered to match their surroundings, separate from logical conclusion. The logical conclusion of wrong as the direct path depends on memory of factors unknown to the observer; an AI cannot be determined to be 'wrong' for a reason which would make sense even to those witnessing its thoughts.

So if one chose betrayal and one did not, any explanation would therefore be moot.
nothingtodo
 
Reply Fri 14 Dec, 2012 05:35 pm
@nothingtodo,
Though on the same note, the honesty of the people in contact could explain it, within reason. We then have a choice: to partially trust an AI with a proven flaw.
Your emotion saves you from this, yet to instill emotion in an AI before its logic is certified is foolish.

I do not know, however, why I bother answering, given it is not my wish that this be done, though something similar is.. and in knowing this we can grasp what is what about what in general. (no ? or !)
nothingtodo
 
Reply Fri 14 Dec, 2012 05:54 pm
@nothingtodo,
I know this to be entirely true, due to the altered brain-to-awareness state I am having, and the evidentiary value of a factor I am reluctant to point out at this time, which proves entirely to me what choice actually is: it is the ability to randomise the answer* only, across two clearly opposing core values.

Contrary to popular belief, the nature of dark vs. light is only a war or a problem due to eventuality, emotion or not. Both are required entirely, at all levels, for survival and advancement.

*Or rather, answer/output of any kind; it relies solely on external parameters/stimuli... emotion being the most obvious.

At this juncture it requires pointing out that a true AI which responds with a cocked eyebrow, or with fuzz at the root, is a smart one.
nothingtodo
 
Reply Fri 14 Dec, 2012 06:14 pm
@nothingtodo,
Though it is true, all aspects of evolutionary development of the brain augment and ramp up consciousness.
dalehileman
 
Reply Sat 15 Dec, 2012 01:29 pm
@nothingtodo,
Quote:
Though it is true, all aspects of evolutionary development of the brain augment and ramp up consciousness.
Not all, Todo.

According to the general principle that nothing is entirely anything while everything is partly something else, some degrade it.
 
