Do you think AI (artificial intelligence) is something to be feared?

 
 
Krumple
 
  2  
Mon 18 Dec, 2017 08:54 pm
@maxdancona,
maxdancona wrote:
The terminology may be difficult, but I am trying to make an important point. AI systems, as they exist with today's technology, don't "try to find the most efficient system" any more than my lamp "tries to light up the room". These are machines that go through a very specific process, as designed by humans, to fulfill a goal.


But my whole point was not to talk about what we have. I don't care about lamps having a single function. (Actually they have two functions, but I am not talking about that either.)

This is why I am getting more and more skeptical that you do anything with AI. I highly doubt now that you have anything to do with it.

You talk to me as if I have no clue what I am talking about. I'm not talking about ******* lamps and cars.

Perhaps you aren't a programmer, so you don't recognize the key points I am hitting without my referring directly to programming. So I honestly feel you don't actually work in AI at all, period.
Krumple
 
  1  
Mon 18 Dec, 2017 08:58 pm
@maxdancona,
maxdancona wrote:
Does a desk lamp have motivation?


Really? Lamps don't have motivation? Well what the ****, how come I am only learning this now? I knew I shouldn't have bought the book "Lamps are not as dumb as you think."

I want my money back.
0 Replies
 
Krumple
 
  1  
Mon 18 Dec, 2017 09:13 pm
@Krumple,
This is computer desire:

CheckPowerLevel() {
    // run the find-power routine if power is low
    if (power == low) {
        FindPower();
    }
}

FindPower() {
    // if a power outlet is reachable, connect the power adapter
    if (accessToPowerOutlet == true) {
        ConnectPowerAdapter();
    }
}

ConnectPowerAdapter() {
    // insert the power adapter into the outlet to recharge
    InsertPowerAdapter();
}

All it needs to know is what an adapter is, what power is, and how to locate a power outlet, plus access to data on its current power level.

This is actually what we do all the time. You just don't internalize it.

What is this feeling I am having?
Oh it is hunger pains.
Maybe I should eat.
I have chosen to eat. What is the next step in the process of eating?
Go to fridge.
Look inside and examine the contents.
What would you like to select?
If you don't want any of the fridge contents, then consider other options.
Go to McDonald's for a cheeseburger.
Etc., etc.

We are programs; we are machines. We just like to flavor it by using words like "desire." Desire is just a very convoluted series of wants. You want it to be more profound than that, but no, it's not profound.
roger
 
  1  
Mon 18 Dec, 2017 09:18 pm
@maxdancona,
Max, I think you are doing a real good job of explaining this.
0 Replies
 
maxdancona
 
  0  
Mon 18 Dec, 2017 09:36 pm
@Krumple,
I am a Software Engineer working in AI (specifically Speech Recognition and Natural Language Processing). My career includes design and programming. You can insult me all you want, but I do know what I am talking about, and I am trying to be patient in explaining it to you. If what I am saying is not interesting to you... then there is no need for me to be wasting my time or yours.

The programming example you give is interesting, but underneath are a bunch of switches. It boils down to a machine that does exactly what it was designed to do. I don't know how the human mind works.... but the human mind does not work like a computer program.

AI is not magic. The current technology involves creating a mathematical model that is designed by human engineers. The computer comes up with a model (basically a set of parameters) that is complex... but it is using a technique that is carefully crafted by engineers. That is what people like me do for a living. You may not believe this, but we aren't magicians. We are engineers using a specific set of processes to analyze data and create models.

I don't know what you are getting upset about. I am only here because you said that you were interested in discussing how AI (as currently used) works. If you are already set in your ideas about what is going on... then there really is no point in continuing this discussion.

You are free to believe whatever you want. And, you are free to swear at anyone who tries to explain to you why your beliefs might not be correct.

I have made the point that AI is just a mathematical process. It is not magic, and in its current form it is nowhere near creating a sentient machine that can have its own will any more than a desk lamp or a transistor radio. So unless there is some intelligent discussion (rather than ad hominems) I will leave it at that.
maxdancona
 
  1  
Mon 18 Dec, 2017 09:50 pm
@Krumple,
This is an interesting article....

https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

Quote:
We are organisms, not computers. Get over it. Let’s get on with the business of trying to understand ourselves, but without being encumbered by unnecessary intellectual baggage. The IP metaphor has had a half-century run, producing few, if any, insights along the way. The time has come to hit the DELETE key.


I ran across this idea a while ago: that comparing the human brain to a computer was just another step in a long line of humans mistakenly comparing the human brain to whatever the latest technology happened to be. I don't think this is the article I was first looking for.

But the human brain is not a computer.
Krumple
 
  1  
Mon 18 Dec, 2017 10:00 pm
@maxdancona,
maxdancona wrote:

This is an interesting article....

https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

Quote:
We are organisms, not computers. Get over it. Let’s get on with the business of trying to understand ourselves, but without being encumbered by unnecessary intellectual baggage. The IP metaphor has had a half-century run, producing few, if any, insights along the way. The time has come to hit the DELETE key.


I ran across this idea a while ago: that comparing the human brain to a computer was just another step in a long line of humans mistakenly comparing the human brain to whatever the latest technology happened to be. I don't think this is the article I was first looking for.

But the human brain is not a computer.



I am not saying the human brain is like a computer.

I was running through a process. That is it. It is completely irrelevant to me if the human brain is like a computer or not. I don't care.

I am referring to the system of steps involved in meeting basic needs. If you were to actually write out how you go about them, you'd see there is no mysterious leap from problem to solution. It isn't random. It's a distinct series of steps from problem to solution.

Now, there might be multiple paths from problem to solution, so efficiency comes into play. Technically speaking, the solution with the fewest steps might be considered the most efficient. Humans are typically lazy; that seems to be our go-to selection. However, the most efficient path isn't necessarily the one with the fewest steps.

That's because you have to factor in what is involved in each step. If you compare the steps of each possible path to the solution, the path with the fewest steps might contain a step that is very difficult to complete.
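A rough sketch of that trade-off in Python (the paths and their per-step costs are entirely made up for illustration):

```python
# Made-up example: the path with fewer steps is not always cheaper,
# because one of its steps can carry a high effort cost.

def path_cost(steps):
    """Total effort of a path = sum of its per-step costs."""
    return sum(cost for _, cost in steps)

# Two candidate paths to the same solution (invented costs):
short_path = [("drive to store", 5), ("special-order part", 40)]            # 2 steps
long_path = [("walk next door", 1), ("borrow part", 1), ("return it", 1)]   # 3 steps

# Pick the path by total cost, not by step count.
best = min([short_path, long_path], key=path_cost)
print(len(best), path_cost(best))  # 3 3  (the 3-step path wins: cost 3 vs 45)
```

So the "lazy" fewest-steps choice loses here, which is exactly the point about factoring in what each step involves.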

Anyways. I still hold that you aren't involved in AI at all.

Keep coming up with these silly deflection ideas, as if I think humans and computers are identical, or that I think a lamp has motivation.

I still need to return that book for my money back, the one called "Your lamp is smarter than you think," ever since you convinced me that lamps don't have motivation.
0 Replies
 
Krumple
 
  1  
Mon 18 Dec, 2017 10:16 pm
@maxdancona,
maxdancona wrote:

I am a Software Engineer working in AI (specifically Speech Recognition and Natural Language Processing). My career includes design and programming. You can insult me all you want, but I do know what I am talking about, and I am trying to be patient in explaining it to you. If what I am saying is not interesting to you... then there is no need for me to be wasting my time or yours.

The programming example you give is interesting, but underneath are a bunch of switches. It boils down to a machine that does exactly what it was designed to do. I don't know how the human mind works.... but the human mind does not work like a computer program.

AI is not magic. The current technology involves creating a mathematical model that is designed by human engineers. The computer comes up with a model (basically a set of parameters) that is complex... but it is using a technique that is carefully crafted by engineers. That is what people like me do for a living. You may not believe this, but we aren't magicians. We are engineers using a specific set of processes to analyze data and create models.

I don't know what you are getting upset about. I am only here because you said that you were interested in discussing how AI (as currently used) works. If you are already set in your ideas about what is going on... then there really is no point in continuing this discussion.

You are free to believe whatever you want. And, you are free to swear at anyone who tries to explain to you why your beliefs might not be correct.

I have made the point that AI is just a mathematical process. It is not magic, and in its current form it is nowhere near creating a sentient machine that can have its own will any more than a desk lamp or a transistor radio. So unless there is some intelligent discussion (rather than ad hominems) I will leave it at that.


I feel like you were trying to insult my intellect by asking if I thought a lamp has motivation. I thought you would understand why I was using the word "motivation." I know a machine doesn't have feelings or emotions. However, a machine would have "needs"; all machines have needs.

An AI unit would need power. If it can produce its own power internally, then its need for power is taken care of; there's no need to keep track of it.

However, maybe there are other needs.

The mechanism is very, very simple. All the AI needs is a way to know when a need has occurred, such as a sensor. Maybe it's lubrication or something.

If there is a need for lubrication, the sensor flips a switch to on.

In programming that is a boolean: true or false. That is a switch, on or off.

If it is flipped to true, then the AI knows to run the functions associated with obtaining lubrication.

We are really no different. We have sensors: pain sensors, hunger sensors, temperature sensors, etc.

When we feel cold, we run through a very quick series of solutions.

PROBLEM:
You are cold.

SOLUTION:
Deal with it. (problem solved??)

Find warmer spot.
Find more clothing.
Get out of the cooler.
Turn up the heat.
Steal your boyfriends jacket.

Etc., etc.

It's a process we run through so incredibly fast that we don't even really notice it. The same thing for an AI would look very similar.

The AI would have a list of needs: power, repair parts, task functions, etc.

And it would have algorithms for each contingency.
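That sensor-switch-handler loop might look something like this in Python (the need names and handler actions are invented for illustration):

```python
# Invented example: each need is a boolean "sensor"; when one reads
# True, the machine runs the contingency routine tied to that need.

def handle_power():
    return "connect power adapter"

def handle_lubrication():
    return "apply lubricant"

# The list of needs, each mapped to its contingency algorithm.
handlers = {"power_low": handle_power, "needs_lubrication": handle_lubrication}

def check_needs(sensors):
    """Run the handler for every sensor currently switched on (True)."""
    return [handlers[need]() for need, on in sensors.items() if on]

actions = check_needs({"power_low": True, "needs_lubrication": False})
print(actions)  # ['connect power adapter']
```

No feelings involved: the "desire" for power is nothing more than a true/false switch selecting which routine runs.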

But this leaves out learning. Learning is a different process: learning is being able to adapt one bit of information, a solution, to a new problem.

Humans are extremely good at this. We have past experience, solutions that worked in the past, but as soon as we are given a new problem to solve we can utilize the past solutions to see if they solve our new problem. We do this very well.

The problem is, machines are terrible at this, because a solution has a series of steps, and sometimes those steps are not relevant to the new problem. For us it's easy to omit a step; we are really good at leaving out steps we find unnecessary. But computer code is terrible at this. It takes a lot of redundant checks to see whether we need a given piece of information, or whether it is useful for the new problem.

Learning is the ability to do this even when the two problems are drastically different.
CameronD
 
  1  
Tue 19 Dec, 2017 04:08 am
@Krumple,
YES IT IS.
Giving them total control of their entire thought process is going to kill a lot of things. Human employment is definitely the first thing to be harmed.
Apart from that, we can't feed emotion to them. They will barely be able to judge a person's disability.
Somewhere down the line, a bug will still survive to harm us badly.
As far as the advancement of technology is concerned, we should be proud of it.
But the H2O you drink is the same H2O that is part of a tsunami.
A "complete control of themselves" is going to kill us.
0 Replies
 
maxdancona
 
  1  
Tue 19 Dec, 2017 06:01 am
@Krumple,
Quote:
If it is triggered to true, then the AI knows to run the functions associated with obtaining lubrication.

We are really no different. We have sensors, pain sensors, hunger pain sensors, temperature sensors, ect ect ect.

When we feel cold, we run through a very quick series of solutions.


This is the crux of our disagreement. The human mind doesn't function anything like a digital computer. You are trying to use the digital computer as a metaphor for the human mind. The metaphor doesn't work because once you look under the hood, what a digital computer is doing is very different from what a human mind does.

A computer is nothing more than a bunch of switches that all work in a predictable way. That is why I think comparing a computer to a lamp, which has one switch (possibly with a sensor; some lamps know to turn themselves on when the room is dark), is far more applicable than comparing it to the human mind. I am trying to make this distinction... my aim isn't to insult your intelligence, but I don't know how to make this argument without pointing out that a computer is nothing more than a device. A human mind is much more than a device.

The fact is that I don't have a deterministic program in my head that controls my actions. Sometimes I jump out of bed, turn on the light and make myself some coffee. Sometimes I lie in the darkness for a bit, thinking about how hungry I am. Humans are nothing like computers; we act truly spontaneously. We have desires, and emotions, and a will. These can't be programmed.

When I am cold, I don't programmatically "run through a quick set of solutions". I just do something without thinking about it... sometimes that involves grabbing a coat. Sometimes it means just saying "f*** it" and ignoring it. I don't go through a heuristic process where I analyze each solution and assign it a mathematical score. As a human, I just react based on whatever I feel.

Even when computers do human things, they don't do them the same way. I work in speech recognition. A computer analyzes a sound wave and builds a tree of possible things that someone might have said. Then it runs through a statistical process to come up with a mathematical score that represents the likelihood of each guess. Then it picks the most likely one (usually), and this guess comes with a confidence score that says how likely it is. Humans, unlike computers, assign meanings to words. We don't have statistical models; you can't tell me the mathematical score that represents the likelihood of a series of words. We have emotional connections to words, and mental images, that all come into play as we process language.
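A toy version of that scoring step, in Python, with invented hypotheses and scores (a real recognizer's numbers come out of acoustic and language models, not a hand-written table):

```python
# Invented example: score each hypothesis, pick the most likely,
# and report a confidence (here, the winner's share of the total score).

hypotheses = {
    "recognize speech": 0.70,
    "wreck a nice beach": 0.25,
    "wreck an ice beach": 0.05,
}

# Pick the highest-scoring guess.
best = max(hypotheses, key=hypotheses.get)
# Confidence: how dominant the winning score is.
confidence = hypotheses[best] / sum(hypotheses.values())
print(best, round(confidence, 2))  # recognize speech 0.7
```

The machine never "means" anything by the winning string; it is just the entry with the largest number.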

Humans aren't computers.

Fil Albuquerque
 
  2  
Tue 19 Dec, 2017 08:16 am
@maxdancona,
When you speak of lamps as a metaphor, you might as well speak of transistors, which are the switches you allude to. The same goes for software/program instructions: yes, they do binary work, on or off, go/no-go. But it is amazing that you claim to work on AI, because whatever you work on seems to have nothing to do with recent dynamic programs like Deep Learning, which improves its own algorithms ON ITS OWN. Your very own account of it shows you don't understand how they work. A page or two ago you clearly stated that programs couldn't reprogram themselves, and that statement is clearly false, as I demonstrated in one of the videos in the case of Deep Learning. This shows that either you are a third-category engineer working at a very low level of programming, with no holistic grasp of how the system works or what it is intended to do, or you are straight-up lying when you claim to work with AI. Maybe you are just an old-school programmer, as I don't want to believe you would have the nerve to be a poseur just to gain some internet points. That would be degrading yourself to a whole new level!
maxdancona
 
  1  
Tue 19 Dec, 2017 08:50 am
@Fil Albuquerque,
"Deep Learning" is not magic. It is simply a mathematical process for building a model. A model consists of a bunch of parameters (i.e. numbers) in a system that we (i.e. the human programmers) set up based on our research and our goals. You are saying that the system is adjusting algorithms... you are kind of right, if what you mean is that the system is doing data analysis to find the optimal set of parameters to raise some metric.

It's not magic. And it isn't anything like what the human brain does. It has no will or desire or the ability to do anything other than go through an established process to adjust a set of parameters in an existing framework.
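In miniature, that parameter-adjustment process looks something like this (a one-parameter least-squares fit by gradient descent on made-up data; the model form, the data, and the learning rate are all chosen by the human, and only the number w moves):

```python
# Made-up example of "learning": adjust one parameter w to shrink an
# error metric on fixed data. The framework is entirely human-designed.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # invented points on y = 2x
w = 0.0  # initial parameter

for _ in range(200):
    # gradient of mean squared error for the model y_hat = w * x
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad  # step the parameter against the gradient

print(round(w, 3))  # 2.0
```

Nothing "reprograms itself" here in any mysterious sense: an established procedure nudges a number until the metric stops improving.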

It is a little funny that you want to judge my career from wherever you are sitting. I get that you want to believe in magic, and that my reality and experience don't match up with your beliefs and hopes. But that is what I do, and if you want magic from an understanding of what is going on, then you are going to be disappointed.

On YouTube you can find people who are trying to make their work sound like magic. Often they are telling the truth, but in as exciting a way as possible. Sometimes these experts are stretching the truth, and sometimes they are full of crap. You don't really gain any experience or useful insight from YouTube videos. I am sorry, but those are the facts. If you want to understand this, you should be reading about the math and how this process is actually done. Once you do, you will see that this isn't magic.

You seem to really want your ideas to be true, so you accept anything that supports them and attack anything that questions them. That, in itself, is a uniquely human trait.
Fil Albuquerque
 
  1  
Tue 19 Dec, 2017 09:11 am
@maxdancona,
"You are kind of right, if what you mean is that the system is doing data analysis to find the optimal set of parameters to raise some metric."

...exactly this and who said it was voodoo???
0 Replies
 
vikorr
 
  2  
Tue 19 Dec, 2017 05:28 pm
I think Max is doing a fine job of explaining why 'AI' (as in sentient AI) won't work.

Every point he has made so far, he appears to have broken down into its simplest form. Rather than being evasive, to me that shows a clarity of mind. That sort of clarity would actually be needed if you wanted to program AI.

If you choose to look at that as an 'insult to anothers intelligence', well, that's up to the individual.

As a note, I haven't studied programming. I just understand enough about it to know the necessity for:
- understanding standalone structure,
- understanding interacting structures, and
- clarity of thought (which includes the ability to break things down into its simplest components).

I'm guessing there are programming terms for such. :)
Krumple
 
  1  
Tue 19 Dec, 2017 05:42 pm
@vikorr,
vikorr wrote:
As a note, I haven't studied programming. I just understand enough about it to know the necessity for:
- standalone structure,
- interacting structures
- clarity of thought (which includes the ability to break things down into its simplest components)
Please excuse me if those aren't programming terms


Further explanation please. A serious request.

What do you mean by your list here?

Standalone structure? Vague; I don't see the correlation, or what you are trying to connect or state here.

Interacting structures? A little less vague, but I can only assume a few possible meanings here. I would assume an interaction with reality, or an internalizing of reality in a useful way, but I'm still unsure what you really mean by it. Please explain as well.

Clarity of thought? It's a little ironic, since you failed at this yourself. And yet you want to suggest that you are more intelligent than any possible AI could ever be? Perhaps not. Please explain.
vikorr
 
  1  
Tue 19 Dec, 2017 06:17 pm
@Krumple,
Uh...

Language is structured. You need to understand the structure of your language in order to be able to speak/program it. Otherwise you spout nonsense, even if you know the individual meaning of the words.

A simple (or standalone) vs interaction structure is somewhat like:
- a building vs a city
- a shop vs an economy
- a HDD vs a computer
- a fuel injector vs a car.
I haven't got those quite perfect, because each of the 'simple' examples has its own simple vs complex within that 'simple example'.

-----------------------------------------

So you want to build an articulated arm with a magnetic tip that lifts a steel ball from a known location... drop the arm till the magnet connects, then lift. You probably wouldn't even need to use more than one part of the articulation. But even so, programming that would take structure.

But what if you want to build an articulated robotic arm (with claw) that throws that same steel ball, with the ability to detect where the ball is?
- Use sensor, program sensor to recognise ball
- integrate sensor with arm controller
- arm controller must properly articulate the arm to place claw just above ball
- claw must function correctly to grab ball
- arm must make the throwing motion (which can be a complex movement, requiring a lot of interaction)
- correct power for distance thrown must be applied (how?)
- release at optimum point (requires precise interaction)

I've missed a lot of steps in order to avoid being too long-winded. The point being: this is a very simple example, yet such an arm would still be very complex to program (it can be done, which isn't the point - you asked what I meant by the list).
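As a sketch, the interaction between those stages might look like this in Python (every stub and number here is hypothetical; a real arm controller is vastly more involved):

```python
# Hypothetical sketch of the throw-the-ball pipeline: each stage is
# simple on its own, but the task only works when the stages interact
# correctly, in order.

def locate_ball():
    """Sensor stage: report the ball's (invented) coordinates."""
    return (3, 4)

def move_arm(target):
    """Articulation stage: place the claw just above the target."""
    return f"claw above {target}"

def grab_and_throw(distance):
    """Actuation stage: throwing power scales with distance, capped."""
    power = min(100, distance * 10)
    return f"throw at power {power}"

pos = locate_ball()
move_arm(pos)
distance = (pos[0] ** 2 + pos[1] ** 2) ** 0.5  # straight-line distance
print(grab_and_throw(distance))  # throw at power 50.0
```

Each function alone is trivial; the "interacting structure" is the fact that the sensor's output feeds the articulation, which feeds the actuation.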

I'm quite sure you understand this, so I'm not sure what motivation led you to ask.

--------------------------------------------------------------------

Quote:
clarity of thought? It's a little ironic since you failed at this yourself. Yet you want to suggest that you are more intelligent than any possible AI could ever be? Perhaps not. Please explain.
I didn't claim clarity of thought - I said it's needed for programming. Nor did I claim to be more intelligent than any AI. Nor can you show anywhere I have even come close to saying such.

And once more, I'm not sure what motivation led you to the above.

Krumple
 
  1  
Tue 19 Dec, 2017 06:36 pm
@vikorr,
vikorr wrote:

Uh...

Language is structured. You need to understand the structure of your language in order to be able to speak/program it. Otherwise you spout nonsense, even if you know the individual meaning of the words.

A simple (or standalone) vs interaction structure is somewhat like:
- a building vs a city
- a shop vs an economy
- a HDD vs a computer
- a fuel injector vs a car.
I haven't got those quite perfect, because each of the 'simple' examples has its own simple vs complex within that 'simple example'.

-----------------------------------------

So you want to build an articulated arm with a magnetic tip that lifts a steel ball from a known location... drop the arm till the magnet connects, then lift. You probably wouldn't even need to use more than one part of the articulation. But even so, programming that would take structure.

But what if you want to build an articulated robotic arm (with claw) that throws that same steel ball, with the ability to detect where the ball is?
- Use sensor, program sensor to recognise ball
- integrate sensor with arm controller
- arm controller must properly articulate the arm to place claw just above ball
- claw must function correctly to grab ball
- arm must make the throwing motion (which can be a complex movement, requiring a lot of interaction)
- correct power for distance thrown must be applied (how?)
- release at optimum point (requires precise interaction)

I've missed a lot of steps in order to avoid being too long-winded. The point being: this is a very simple example, yet such an arm would still be very complex to program (it can be done, which isn't the point - you asked what I meant by the list).

I'm quite sure you understand this, so I'm not sure what motivation led you to ask.

--------------------------------------------------------------------

Quote:
clarity of thought? It's a little ironic since you failed at this yourself. Yet you want to suggest that you are more intelligent than any possible AI could ever be? Perhaps not. Please explain.
I didn't claim clarity of thought - I said it's needed for programming. Nor did I claim to be more intelligent than any AI. Nor can you show anywhere I have even come close to saying such.

And once more, I'm not sure what motivation led you to the above.




The process of recognition doesn't exactly work in the sense of a tree and a forest.

The funnel for the data works best when it is keyed to the object's function instead of to what the object is.

Here is an example.

A door. What is a door? We can define it using a dictionary definition, but that doesn't say what its function actually is.

So let's assume you are completely stupid. You have no clue what anything is. You need to be able to identify a door. How do you determine what a door is? Let's say you are placed in a room with multiple objects.

Now, to throw a monkey wrench into this, there is a table in the center of the room. So suppose you attempt to be clever by saying: I would look for a doorknob; only doors have doorknobs; therefore, if you locate a doorknob, that is where the door is.

But the table in the center of the room has a doorknob attached to it. There are also hinges attached to the table top. Is the table top actually a door? Well, currently the table top is functioning as a table top, not a door. So how can it possibly be both a table top and a door?

This is why we classify objects not by what they are but by what their function is. What the object is is irrelevant.

Another quick example is a car. If the motor is removed, is it still a car?

The function changes between a car with a motor and a car without one. This change in function is how you properly identify the object.

So going back to a building or a city.
Or a shop vs an economy.

We don't categorize them in that way. Instead it's the function that identifies them.

A building doesn't function like a city; that is what determines their difference. The same is true for a shop vs. an economy.

This is how you define things for a machine: not by nouns or pronouns, but by function. What does, or what should, the object do?

If the object does not behave as the function defines then that object is not what it has been claimed to be.

So consider a picture of a door painted on a wall, with a doorknob attached to the picture. Is this a door? It looks like a door. It has a doorknob. But can it be "opened"? Can it "move"? Well, you might be able to move the picture, so is the picture really a door or not?
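Programmers actually have a name for classifying by behavior rather than by label: duck typing. A Python sketch with invented objects:

```python
# Invented example: classify something as a "door" not by its label or
# its doorknob, but by whether it actually performs a door's function.

class TableTopWithKnob:
    """Has a doorknob and hinges, but cannot actually open."""
    def open(self):
        raise RuntimeError("bolted down; functioning as a table top")

class SwingingPanel:
    """Carries no label at all, but it opens."""
    def __init__(self):
        self.is_open = False
    def open(self):
        self.is_open = True
        return True

def functions_as_door(obj):
    """A door is whatever actually performs the door's function."""
    try:
        return bool(obj.open())
    except Exception:
        return False

print(functions_as_door(TableTopWithKnob()))  # False
print(functions_as_door(SwingingPanel()))     # True
```

The doorknob fools a label-based classifier; the function test is not fooled, because the table top fails to open.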
vikorr
 
  1  
Tue 19 Dec, 2017 06:47 pm
@Krumple,
Hi Krumple,

I understood many of the difficulties you pointed out... I was trying to avoid them because of how long-winded they are.

And I wouldn't have the foggiest idea how they managed to program picture recognition etc.
Krumple
 
  1  
Tue 19 Dec, 2017 06:58 pm
@vikorr,
vikorr wrote:
And I wouldn't have the foggiest idea how they managed to program picture recognition etc.


Well, I won't say it is easy. But even a picture has characteristics that determine its function. Do pictures change over time? They might degrade, but do they change?

Say you have a picture painted on a wall of a window with its frame. Then there are objects painted to look like objects "outside" the painted window, such as a tree, a parked car, etc.

Right next to this painted window you have an identical real window with its framing. They look incredibly similar. Even the objects are similar: the same tree and the same parked car, in the same colors.

How would the machine go about determining which is the actual window and which is the picture? Even humans can be fooled, if the painting is done really well and you can't notice the nuances of depth or light refraction.

The ONLY real way to tell them apart would be to wait, perhaps forever, to see which one changes over time. The one that changes is the window. Why? Because it is actually reality, whereas the painting is not. So if a bird happens to fly by the window, the machine would notice that a "new" object has appeared. It doesn't even have to recognize it as a bird - just that there is a new object, while the picture has never shown any new objects.

This might sound childish or too simplistic, but this is the frame in which machines recognize objects - in facial recognition, and even in voice recognition.
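That wait-and-see test is basically what's called frame differencing. A Python sketch on tiny made-up "frames":

```python
# Invented example: decide which region is the real window by watching
# for change over time. Frames are tiny grids of numbers; any pixel
# difference between two observations counts as "something new appeared".

def changed(frame_a, frame_b):
    """True if any pixel differs between the two observations."""
    return any(a != b for row_a, row_b in zip(frame_a, frame_b)
                      for a, b in zip(row_a, row_b))

painting_t0 = [[1, 1], [2, 2]]
painting_t1 = [[1, 1], [2, 2]]   # the painting never changes

window_t0 = [[1, 1], [2, 2]]
window_t1 = [[1, 9], [2, 2]]     # a bird flew past: a new "object"

print(changed(painting_t0, painting_t1))  # False
print(changed(window_t0, window_t1))      # True
```

The machine doesn't need to know it was a bird; a change in the frame is enough to separate the window from the painting.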
0 Replies
 
Krumple
 
  2  
Tue 19 Dec, 2017 07:04 pm
@vikorr,
Alexa gone wrong.
0 Replies
 
 
