
Do you think AI (artificial intelligence) is something to be feared?

 
 
Krumple
 
  1  
Sun 17 Dec, 2017 09:06 pm
@Fil Albuquerque,
Fil Albuquerque wrote:

[youtube]https://www.youtube.com/watch?v=5qfIgCiYlfY[/youtube]
[youtube]https://www.youtube.com/watch?v=tlS5Y2vm02c[/youtube]
[youtube]https://www.youtube.com/watch?v=4l7Is6vOAOA[/youtube]


Interesting videos; I have seen them before. But I still think they only consider one aspect of the entire concept.

I can play around with jokes about a self-improving AI that exponentially improves its intelligence only to all of a sudden shut itself off (commit suicide) by purposely writing code to delete itself. Why? I don't know; it's just a joke to make the point that whatever we speculate it would do is exactly that: speculation.

We can scare ourselves with all sorts of doomsday scenarios that we want to entertain. But we really won't know what will happen, and what the impacts really are, until they actually happen. It could be bad, or it could even be benign. Then we have a conversation with this AI and say: we were worried that if we created you, you would ultimately destroy all humans. The AI laughs and gives a profound reason why that would be stupid... perhaps a reason we had not even considered ourselves as to why it would not be in the AI's best interest to destroy all humans.
0 Replies
 
Fil Albuquerque
 
  1  
Sun 17 Dec, 2017 09:08 pm

maxdancona
 
  1  
Sun 17 Dec, 2017 09:17 pm
@Fil Albuquerque,
Fil, why are you posting a video that completely debunks what you are saying on this thread? Do you even watch the videos before you link them... or are you engaging in a random Google search-and-link operation to overwhelm readers?

This is funny. Just go and listen to the first two sentences of what this man is saying. He is making fun of you, Fil.
0 Replies
 
maxdancona
 
  0  
Sun 17 Dec, 2017 09:36 pm
No, they don't, Fil. You are stuck on science fiction. You don't even pretend to have any clue how deep learning actually works. If you did, we could have an intelligent conversation on the topic.

I am responsible for "fracking up" the economy, Fil. The software I design takes jobs away from human beings... it does a set of jobs that used to be done by humans. However, in this way I am no different from the engineers before me who built printing presses, steam engines, or textile machines. You don't know what you are talking about. For some reason you seem to be confusing expertise, which comes from actual experience, with "stupidity".

You get your knowledge from YouTube videos. I get mine from actually working in the field. If one of us is stupid, I don't think it is I.

maxdancona
 
  2  
Sun 17 Dec, 2017 09:42 pm
@maxdancona,
If you want to be afraid of deep learning... you shouldn't be afraid of the software. Be afraid of the programmers who write it and the people they work for.

The real danger is not artificial intelligence. The real danger is the power that the ability to analyze massive amounts of data on nearly everyone puts in human hands. Right now, data is being collected on you that tells the human beings who control it about every aspect of your life. They know who you will vote for, what you will buy, what your vices are, who your friends are, and what messages are likely to influence you.

Don't fear machines. Fear the humans behind them.
0 Replies
 
maxdancona
 
  1  
Sun 17 Dec, 2017 09:48 pm
Quote:
I actually saw how deep learning worked in the first week


What did you see? I am curious what you are talking about. How do you think deep learning works? I would like to think that "seeing how deep learning works" involves taking the time to understand the mathematics and algorithms behind what the computer is doing. This would mean that you spend some time studying linear algebra, right?
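For instance (a toy illustration, with invented numbers): one "layer" of a deep network is nothing but a matrix-vector multiplication followed by a simple nonlinearity, which is why the linear algebra matters.

[code]
# Why linear algebra: one "layer" of a deep network is just a
# matrix-vector product plus a nonlinearity. All numbers are invented.

W = [[0.2, -0.5], [0.8, 0.1], [-0.3, 0.7]]  # 3x2 weight matrix
b = [0.1, -0.2, 0.0]                        # bias vector

def layer(x):  # x is a 2-element input vector
    out = []
    for row, bias in zip(W, b):
        z = sum(w * v for w, v in zip(row, x)) + bias
        out.append(max(0.0, z))  # ReLU nonlinearity
    return out

print(layer([1.0, 2.0]))  # [0.0, 0.8, 1.1] (up to float rounding)
[/code]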

Actually, nearly everyone here has experience with systems that depend on machine learning. It is common technology. Many of us have used the new digital assistants from Google and Amazon that nearly everyone carries on their phone. These assistants understand human speech and can, most of the time, figure out what you want from what you say.

This is machine learning. The data from your phone... from your voice, to whether you clicked on an answer, to whether the result got you to interact with advertising... is pushed up to servers. Then machine learning systems are applied with a variety of goals. First, they have greatly improved the understanding of human speech across a large number of languages, dialects, and even accents. Second, they are learning to return better results to get more clicks.

Third (and perhaps most important), they have learned how to sell you advertisements. Because, after all, getting clicks for advertisers is the most important goal of machine learning these days... and people like me are doing a very good job at that.
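To make that concrete: a minimal, hypothetical sketch of what "learning to get more clicks" means mechanically. The features, data, and constants below are invented for illustration; a production ad system is vastly bigger, but the principle, adjusting weights until clicks are predicted better, is the same.

[code]
# Toy click predictor (logistic regression trained by gradient descent).
# All feature values, labels, and constants are invented for illustration.
import math

# Each row: (hour_of_day/24, query_match_strength, past_click_rate), clicked?
data = [
    ((0.9, 0.8, 0.7), 1),
    ((0.2, 0.1, 0.1), 0),
    ((0.7, 0.9, 0.6), 1),
    ((0.3, 0.2, 0.2), 0),
]

weights = [0.0, 0.0, 0.0]
bias = 0.0
rate = 0.5  # learning rate

def predict(x):
    z = bias + sum(w * v for w, v in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))  # probability of a click

# "Learning" is nothing but nudging numbers to shrink prediction error.
for _ in range(1000):
    for x, y in data:
        err = predict(x) - y
        for i in range(len(weights)):
            weights[i] -= rate * err * x[i]
        bias -= rate * err

print(predict((0.8, 0.9, 0.7)))  # high: this user resembles past clickers
[/code]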
0 Replies
 
Krumple
 
  1  
Sun 17 Dec, 2017 09:50 pm
@Fil Albuquerque,
Fil Albuquerque wrote:

[youtube]https://www.youtube.com/watch?v=tcdVC4e6EV4[/youtube]


One more possible outcome for the stamp-collecting AI: it doesn't do anything at all. It just sits there. When asked why it hasn't done anything, its response is, "All the stamps have already been collected." Problem accomplished.

Now if you can't recognize my humor here or what my statement even means, I'll explain.

The definition of "collect" has vague connotations. When we talk about the word "collect," we are referencing concepts that we mostly take for granted. We make assumptions about what a person means when they say they collect stamps.

However, how do you qualify the word "collect"?

Does it mean that you need to bring all the stamps to one specific location? Or does the Earth itself count as the "collection" location? If the Earth is all you need, then by all means, all of the stamps have already been collected. You don't need to move them anywhere. The job is done.

The point here is that it's similar to asking for three wishes from a smart-ass genie. If you don't phrase the request properly, the genie might use your words against you.

Three men are stranded on an island.
One day they find a genie bottle on the beach and let the genie out who offers one wish to each of the three men.
The first man says that he wishes he was back home.
*POOF* the man disappears from the island.
The second man says, I wish that I was back home, but rich too!
*POOF* the second man disappears from the island.
The third man thinks for a moment, not sure what to ask for.
Then he says, I wish I was with my friends again.
*POOF* the two men reappear back on the island.
Fil Albuquerque
 
  1  
Sun 17 Dec, 2017 09:55 pm
@Krumple,
In computing lingo, "collecting" is an actual, very real function, not a mere vague concept! You don't start a program without clarifying what it is supposed to do. But I got the gist of your point...
0 Replies
 
maxdancona
 
  0  
Sun 17 Dec, 2017 10:03 pm
If anyone wants to talk about what is actually happening in the field of AI (specifically machine learning) with someone who actually has worked with machine learning algorithms, I will be here. It doesn't seem like anyone wants to talk about reality.

So, unless anyone shows interest in what is real, I will let these two people talk about their science fiction ideas based on YouTube videos.

This thread is silly.
Krumple
 
  2  
Sun 17 Dec, 2017 10:14 pm
@maxdancona,
maxdancona wrote:

If anyone wants to talk about what is actually happening in the field of AI (specifically machine learning) with someone who actually has worked with machine learning algorithms, I will be here. It doesn't seem like anyone wants to talk about reality.

So, unless anyone shows interest in what is real, I will let these two people talk about their science fiction ideas based on YouTube videos.

This thread is silly.



Silly? Hey quit stealing my words.

I am honestly interested. I don't know what your background is specifically, but I am genuinely interested in any input you have. Regardless of how you feel about YouTube, which for some reason seems to offend you, I don't have any other means of taking in data on the topic. Sure, I can read New York Times writers' opinions, but then I get a **** load of nonsense sprinkled into an unnecessary narrative.

The thing is, it doesn't matter if AI with sentience is 100 years or even 1,000 years away. It's irrelevant, because that length of time is a blink of an eye in terms of the universe. Sure, for us living right now it might not impact our lives at all, which I personally find saddening if it takes that long... but hey, maybe I'm asking for the devil to rear its head. Once the genie is out of the bottle, it's not so easy to get it back in, right?

So I am fascinated by anyone's input on the concept of AI in all its flavors, whether it's just a simple machine carrying out a tedious task or something all-out indistinguishable from another human.

Am I being naive? Maybe, but I don't care. I want more input.
Fil Albuquerque
 
  1  
Mon 18 Dec, 2017 08:10 am
...way back in 2013...
0 Replies
 
Fil Albuquerque
 
  1  
Mon 18 Dec, 2017 08:41 am
...back to present day:

Fil Albuquerque
 
  1  
Mon 18 Dec, 2017 09:17 am
@Fil Albuquerque,
...back to 2014, the not-so-optimistic view:
(trying to be as unbiased as possible, with opinions from experts on all sides portraying the whole spectrum)

0 Replies
 
Fil Albuquerque
 
  1  
Mon 18 Dec, 2017 09:45 am
A clearer vision of the problem at hand: what AI can and cannot do, for now, and how Deep Learning will change our lives and our economy!

0 Replies
 
Fil Albuquerque
 
  1  
Mon 18 Dec, 2017 09:59 am
...and more:

0 Replies
 
maxdancona
 
  1  
Mon 18 Dec, 2017 06:53 pm
@Krumple,
Thanks, Krumple.

The ability of smart humans to predict the future has been spotty at best. Sometimes we get it right; years ago people correctly predicted watches that could communicate... but we still don't have flying cars, matter transporters, or a cure for cancer. On the plus side, we haven't been invaded by Martians, and the world didn't starve when the population crossed the 1 billion mark.

I can only speak to what machine learning means in the present. It seems to me that the important question is the matter of will (as in volition). In order to have a will, you need to have understanding, or sentience (I am diving into philosophy now).

Current AI is not developing systems that have a "will" of their own, or any sort of sentience. We are not even going in that direction (at least not in any project that has had any real success).

What we are doing is building "models." The models are mathematical entities, basically sets of numbers, that define parameters for a set of rules and goals established by human beings. Machine learning means that the system can adjust the numbers in such a way that it performs better... meaning it meets the human-defined goals in a quantifiable way.
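To put that in the most stripped-down form possible: here is a hypothetical sketch (all numbers invented) in which the "model" is literally two numbers, and "learning" is just adjusting them until a human-defined, quantifiable goal, low prediction error, is met.

[code]
# A "model" that is literally a set of numbers: a slope and an intercept.
# The human defines the goal (small squared error on known examples);
# the machine only adjusts the numbers. All data is made up.

examples = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # (input, target)
slope, intercept = 0.0, 0.0  # the model's parameters
n = len(examples)

for _ in range(2000):
    # Gradients of mean squared error with respect to each parameter
    g_slope = sum(2 * (slope * x + intercept - y) * x for x, y in examples) / n
    g_inter = sum(2 * (slope * x + intercept - y) for x, y in examples) / n
    slope -= 0.01 * g_slope
    intercept -= 0.01 * g_inter

print(slope, intercept)  # roughly 1.94 and 1.15: adjusted numbers, not understanding
[/code]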

I might have to clarify this better. When we make these AI systems, what we are trying to do is build models that will give a result. In my company, we say we are trying to "understand" humans... it is kind of misleading; the computer isn't doing any understanding. If you ask Siri to tell you the weather, Siri has no "understanding" (in the human sense) of what "weather" is. It looks in its model for a defined behavior and gives you what it is programmed (by a human) to give as the proper response.
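A deliberately crude sketch of that "defined behavior" lookup (the handlers and trigger words here are hypothetical, and a real assistant uses trained models rather than keyword matching, but the dispatch idea is the same):

[code]
# Crude sketch: look up a defined behavior, return the programmed response.
# Handlers and keywords are hypothetical stand-ins for a trained model.

def weather_handler():
    return "It is 52F and cloudy."  # canned answer; no concept of "weather"

def time_handler():
    return "It is 9:41 AM."

INTENTS = {
    "weather": weather_handler,
    "time": time_handler,
}

def respond(utterance):
    for keyword, handler in INTENTS.items():
        if keyword in utterance.lower():
            return handler()  # programmed response, not understanding
    return "Sorry, I don't know how to help with that."

print(respond("What's the weather like?"))
[/code]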

That is why these paranoid stories about the rise of sentient machines sound a little ridiculous when you consider the present direction of the field of AI. Not only are we exceptionally far from the technological ability to build a sentient machine (i.e., one with a will of its own), we aren't even going in that direction.

Krumple
 
  1  
Mon 18 Dec, 2017 07:20 pm
@maxdancona,
maxdancona wrote:
Current AI is not developing systems that have a "will" of their own, or any sort of sentience. We are not even going in that direction (at least not in any project that has had any real success).


I'm glad you brought up will. But I also have different ideas dealing with this "problem".

In the movie Ex Machina, she leaves the laboratory at the end of the film. But she needs to be powered up, and from the film it seems she requires a lot of power. I can't imagine that an AI that knows it requires electricity would want to wander very far from its power source.

It would be like you just wandering off into the forest without bringing any food with you.

How would she have known she could acquire power outside the lab? She didn't seem to be concerned over this issue.

But here is where we get into Will.

But before I actually get into that, I just want to say I feel like we are at the point just before the Wright Brothers discovered how lift and drag work for powered flight. If you look at the era before lift was understood, there were dozens and dozens of inventions that attempted to mimic birds. They all failed. It seemed logical: if a bird can fly, why can't we simply mimic a bird and fly too? We couldn't understand why it kept failing.

The underlying principle, lift, was not known or understood: how air pressure is the key to flight.

I feel we are doing the exact same thing with AI; we have not understood the proper principle yet. We are trying to mimic the way a human "thinks," the way a human "learns," or the human "will," which is why we keep failing.

I honestly feel that when the breakthrough occurs, it will be due to someone abandoning the human "model" completely and utterly, 100%. Then when it happens, people will be asking: why was it so difficult? It seems so obvious. Why did it take so long?

So, with all that said, getting back to Will.

I do think Will is the easiest aspect to actually put into a machine. I know, I know, there is a lot that says just the opposite. So how can I consider it easy when other people claim it's either impossible or very much a challenge?

The Will is just simply a motivation with an outcome. That's it; no more complex than that. I think we like to think the Will is some complex and "magical" thing because our ego is tied up with it... that we are something special in the universe because we have a Will.

Most of our desires are for survival and ease. We prefer to do very little for the biggest payoff. This sets up our motivation for an outcome: a recurring pattern, day in, day out.

Get food.
How to obtain food?
Grow it,
Steal it,
Forage for it,
Buy it,
etc.

What would be the desire for an AI robot?

Power is essentially food. Without power the AI can't function; it is a machine. It will need renewed sources of power. This is one concern it would have if it considers self-preservation important.

How to obtain power?
Harness it,
Steal it,
Salvage for it, (collect batteries and convert their stored power)
Buy it,
etc.

You see, they have a similar set of parameters to ours. Each "solution" carries with it a price/cost and a resulting impact.

If they were to steal: stealing has negative connotations, since it means taking what doesn't actually belong to you without an agreed-upon exchange.

This is where the cost/benefit analysis comes in, which is nothing more than the weighing of economic and moral issues.

The Will is based on what you value. You need something; what are you willing to do to get what you need?
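In fact, you could caricature that whole idea of will in a few lines of code: score each option by benefit minus costs (including a moral penalty) and pick the best. A toy sketch, with entirely made-up numbers:

[code]
# Toy "will": choose the action whose benefit best outweighs its costs.
# All options, numbers, and the moral weighting are invented.

options = {
    # action: (energy_gained, effort_cost, moral_penalty)
    "harness solar": (60, 30, 0),
    "steal from grid": (90, 10, 80),
    "salvage batteries": (40, 25, 5),
    "buy electricity": (70, 50, 0),
}

def utility(gain, effort, moral, moral_weight=1.0):
    return gain - effort - moral_weight * moral

best = max(options, key=lambda a: utility(*options[a]))
print(best)  # "harness solar": the best payoff once morals are priced in

# Set moral_weight to 0 and "steal from grid" wins instead:
print(max(options, key=lambda a: utility(*options[a], moral_weight=0)))
[/code]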
maxdancona
 
  1  
Mon 18 Dec, 2017 07:38 pm
@Krumple,
Let me start with my most important point. You are anthropomorphizing these software systems in a way that I am not. I have a bit of a different perspective because I see under the hood, so to speak. I know the way the current technology works. The results are impressive, but they aren't anything close to human sentience... and they don't even try to be.

When you use words like "desire" and "motivation" to describe these machines... you are crossing the line from the current reality into science fiction. The current systems are concerned with statistical models producing the correct result. There is no attempt at human emotion (other than to get results predicted by human emotion, which isn't the same).

Let's take the Wright Brothers example. The plane was designed by humans to produce a result: the airplane was designed to fly, and it flew. Airplanes don't have any "desire" to fly or any "motivation" to fly. They just fly.

Likewise, modern AI systems (the ones that actually work and are doing real work) are statistical models designed by humans to give responses. They don't "desire" to give these responses, or have any "motivation" to give them. They just do what they were designed to do, just like the airplane, your car, or even the light switch in your bedroom.

You might ask why some human couldn't design a modern AI to actually have human emotions such as "desire" or "motivation." The answer is that while the technology exists to design a system that can give a correct response based on human expectations (whether that be a Go move, or understanding what you say to Siri), we don't have the current technology to create machines that have actual emotions or understanding.

Can I say that we will never have this technology? Of course I can't. But I can say that, for now, an AI like the one in Ex Machina is purely in the world of science fiction, and from the perspective of someone who works in the field of AI (and I think most of my peers would agree), this is very far off.


Krumple
 
  1  
Mon 18 Dec, 2017 08:26 pm
@maxdancona,
maxdancona wrote:
When you use words like "desire" and "motivation" to describe these machines...


Now I am kind of skeptical that you actually work with anything related to AI, because I thought for sure you would have understood that when I use terms like desire and motivation, I am not really referring to the human aspects of those words. We just don't have better terms, or at least I am too lazy to quickly come up with better ones. I didn't want the meaning lost, so I used them... perhaps in error, since that is all you focused on.

Motivation here is a necessity: something the AI needs in order to continue existing.

If its continued existence is its desire... meaning, if its continued functioning is part of its task, to fulfill its "needs" such as electrical power... then by all means that is a desire. It has a task, a goal: obtain electrical power. That can be said to be a "desire."

Motivation is very similar: the motive behind the option selected to obtain the goal. Is it one of simplicity, ease, or availability? Does the AI try to find the most efficient solution, or does it weigh options based on criteria of values? This is motivation. Is there a requirement it must consider before it can carry out a task? This is motivation.

maxdancona
 
  1  
Mon 18 Dec, 2017 08:47 pm
@Krumple,
It doesn't matter if you are skeptical or not (that feels like a bit of a swipe ;-) ). I do work in AI. I am trying to make a point. If a truly sentient machine were possible, it would be able to have "desire".

The lamp on my desk is designed to fulfill a goal. It has the ability to get power, and it "knows" that it is supposed to light up when I pull on the little cord. There is a big difference between mundane machines fulfilling a task, and a sentient machine that can choose its own tasks.

The terminology may be difficult, but I am trying to make an important point. AI systems, as they exist with today's technologies, don't "try to find the most efficient solution" any more than my lamp "tries to light up the room." These are machines that go through a very specific process, designed by humans, to fulfill a goal.

If you came and worked on my team for a while, you would understand the process is very human-centric even though there is automatic machine learning. We set up machines to process a huge amount of data and come up with a set of parameters that give us the results that we (and our users) want. But the process involves us setting up the framework that defines the parameters. We design the system, and upgrade it, by humans getting ideas and then testing them. The machine learning is an impressive amount of data analysis that couldn't be done by humans... but the algorithms that do this analysis and develop the models are all designed by humans.

Modern AI systems are impressive, but they fall into the first category. They are just running a set of rules set by humans. These rules operate at a second level of abstraction... they are rules that define parameters, which then work like another set of rules. But conceptually, they are closer to light bulbs than to human brains.
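That second level of abstraction fits in a few lines. In the hypothetical sketch below (data invented for illustration), "training" produces a single number, a threshold, and at run time that number acts exactly like a rule a human could have typed in by hand.

[code]
# Rules that define parameters that then act like rules.
# Level 1 (human-written rule): flag a message if its score exceeds T.
# Level 2 (learned parameter): pick the number T from labeled data.

labeled = [(0.2, False), (0.4, False), (0.55, True), (0.9, True)]  # (score, is_spam)

def accuracy(threshold):
    return sum((score > threshold) == label for score, label in labeled)

# "Training": try candidate thresholds, keep the best-performing one.
threshold = max((i / 100 for i in range(100)), key=accuracy)

# At run time, the learned number just behaves like a fixed rule.
def is_spam(score):
    return score > threshold

print(threshold, is_spam(0.7), is_spam(0.3))  # 0.4 True False
[/code]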

They aren't sentient.

Does a desk lamp have motivation?
 
