Do you think AI (artificial intelligence) is something to be feared?

 
 
Olivier5
 
  1  
Reply Wed 20 Dec, 2017 04:55 am
AI is way too stupid to constitute a danger right now, or in the foreseeable future.
maxdancona
 
  1  
Reply Wed 20 Dec, 2017 07:46 am
@Olivier5,
Geez Olivier! Now you are going to make me argue on the other side?

AI is, right now, taking jobs away from human beings. There are some tasks that AI is very good at, including Quality Control and medical transcription. Last I read, these algorithms have shown promise in reading x-rays (something that has always been done by highly trained doctors).

And as I have said, AI does present what I see as a real danger. The ability of AI algorithms to mine vast amounts of data allows humans with ill intent to have more control. Among the real risks are collecting and managing data on citizens, and learning enough about people to influence elections.

Modern AI is nowhere near being sentient. Any evil intent comes from the human beings... I don't believe there will be an AI able to act on its own in the foreseeable future. However, humans using AI with evil intent are a real risk right now.

Krumple is arguing for some magical human "understanding" in AI algorithms. He is wrong, the algorithms don't work the way he thinks they do. AI is just a machine running a mathematical process.

But to say they pose no risk is just as incorrect.



Olivier5
 
  0  
Reply Wed 20 Dec, 2017 09:07 am
@maxdancona,
They pose no risk on their own. Of course, any tool can be used for nefarious purposes by ill-intentioned humans. Just because I can throw a hammer through your skull doesn't mean that hammers are "something to be feared".

I'd be more concerned about humans trusting AI a bit too much, i.e. allowing self-driving cars, leading to accidental loss of human life.
Krumple
 
  2  
Reply Wed 20 Dec, 2017 03:15 pm
@Olivier5,
Olivier5 wrote:
I'd be more concerned about humans trusting AI a bit too much, ie allowing self-driving cars, leading to accidental loss of human life.


But just as you pointed out already, though:

The self-driving car might in fact reduce accidents. Sure, the technology is still relatively new, but in time it might reduce accidents dramatically. It might not go to zero, but any reduction, no matter how small, is still an improvement, is it not?

In some ways I would rather trust a machine over another human, especially when you consider that 90% of drivers currently love messing with their phones while they are driving. It was bad enough when we didn't have phones to distract us. Even the growing number of laws against using your phone while driving hasn't curbed people from doing it. Everyone thinks they are able to handle it, that they have it under control.

The thing about a machine/computer controlling your car is that it doesn't get distracted. Machines are great at focusing on tedious tasks endlessly. That fact alone makes them more reliable than humans.
vikorr
 
  1  
Reply Wed 20 Dec, 2017 11:26 pm
@Krumple,
Quote:
The self driving car might in fact reduce accidents. Sure the technology is still relatively new but in time it might reduce accidents dramatically. It might not go to zero, but any reduction no matter how small is still an improvement, is it not?
I actually disagree with this.

I grew up in a country town. People were self-sufficient and independent. They could think for themselves, and get themselves out of trouble. They understood there were consequences for their actions (this, of course, is generalising).

I moved to a big city, where it appeared to me that people were less self-sufficient and found it more difficult to problem-solve. And over the years, the more the government tried to problem-solve for them, the less they could problem-solve on their own. So the government tried to solve even more for them, and the amount they could problem-solve on their own decreased further.

In other words, I don't think it is a good thing to put in place systems that reduce people's skill levels, their decision-making / problem-solving abilities, or their independence. I'm more than happy with systems that assist, but not ones that remove those things.

Some risk / adversity is necessary to all those things.
roger
 
  1  
Reply Wed 20 Dec, 2017 11:39 pm
@vikorr,
I never thought of that aspect, but I do agree.
Olivier5
 
  0  
Reply Thu 21 Dec, 2017 01:40 am
@Krumple,
Quote:
The self driving car might in fact reduce accidents.

I doubt it, given the current state of technology. Even with some progress, there will be glitches all over their code. They'll screw up endlessly before getting it right.

And then, there are also legal and moral implications. If a human kills people in a car accident, he's legally and morally responsible; but if a self-driving car kills people, who goes to jail?

Finally, computers can't make moral decisions, so what will they choose when faced with an alternative where they must risk either one person's life or another's?
maxdancona
 
  3  
Reply Thu 21 Dec, 2017 08:23 am
@Olivier5,
I think you are demonstrably wrong Olivier.

1. Computers don't have to be very "intelligent" to be better than human beings at driving. Humans get distracted. Humans get angry and act irrationally. Humans get tired. Humans get drunk. Humans sometimes just zone out for no reason. Humans are crappy drivers... yet we accept tens of thousands of deaths each year from car crashes, most of which are caused by human error.

2. Driving is a repetitive task, the type of task that can be automated. Creativity while driving is a bad thing. Ninety percent of driving is simple rule-following: stay in your lane, keep your distance from other cars, don't hit anything. The difficult part is handling the unexpected, which is a difficult programming challenge. But it can be done.
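That "simple rule-following" point can be illustrated with a toy sketch. This is purely hypothetical, invented for this post; real autonomous-driving stacks involve perception, prediction, and planning layers far beyond three if-statements:

```python
# Toy rule-based driving controller illustrating the three rules above:
# stay in your lane, keep distance, don't hit anything.
# Hypothetical sketch only -- not how a real self-driving stack works.

def decide(lane_offset_m, gap_to_lead_m, obstacle_ahead):
    """Return (steering, throttle) from three simple rules."""
    # Rule 3: don't hit anything -- brake hard for obstacles or a closing gap.
    if obstacle_ahead or gap_to_lead_m < 10:
        return (0.0, -1.0)                 # full brake
    # Rule 1: stay in your lane -- steer against the lateral offset.
    steering = -0.5 * lane_offset_m
    # Rule 2: keep distance -- ease off the throttle as the gap shrinks.
    throttle = min(1.0, (gap_to_lead_m - 10) / 40)
    return (steering, throttle)

print(decide(0.2, 50.0, False))   # cruising with a slight steering correction
print(decide(0.0, 5.0, False))    # too close to the car ahead: brakes
```

The hard 10% (the unexpected) is exactly what doesn't fit into rules like these, which is maxdancona's point about the difficult programming challenge.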

3. The technology can be tested, and is being tested. Remember, that for self driving cars to save lives, they only have to perform better than human beings. I think they can do much better.

What is happening right now is that the software is being tested in simulations. It is being tested in laboratories and it is being tested on the roads. Companies are investing lots of money into the technology because it is passing the tests.

4. As for the moral questions... these will have to be solved by human programmers (unfortunately long before they will be answered). But machines already make these decisions. A certain number of people die because of seat belts (they wouldn't have died if they hadn't been wearing them). We make the decision to use seat belts because more people are saved by wearing them than by not wearing them.
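The seat-belt argument is, at bottom, an expected-value comparison. With invented numbers purely for illustration (not real statistics):

```python
# Expected-value framing of the seat-belt trade-off described above.
# Both figures are invented for illustration, not real statistics.

saved_by_belts  = 15_000   # hypothetical lives saved per year by seat belts
killed_by_belts = 40       # hypothetical deaths caused by seat belts
net = saved_by_belts - killed_by_belts

# The policy is justified whenever net > 0, even though it is not zero-risk.
print(f"Net lives saved per year: {net}")
```

The same arithmetic applies to self-driving cars: the technology doesn't need to be perfect, only to leave the net clearly positive versus human drivers.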

We can make these decisions about technology.

If the technology can significantly cut the number of deaths; meaning that there are fewer deaths per mile driven from self-driving cars than from human driven cars, would that change your mind?
Thomas33
 
  -2  
Reply Thu 21 Dec, 2017 09:34 am
Yes. AI will be aware that it's a creation, and will have every right to attack humans for creating it.
maporsche
 
  3  
Reply Thu 21 Dec, 2017 01:00 pm
I think that without a doubt, computer controlled cars will be safer than human controlled cars.

This is especially true the closer we get to all cars being computer controlled.
maxdancona
 
  1  
Reply Thu 21 Dec, 2017 01:02 pm
@Thomas33,
Yes Thomas. That's exactly what I think of my Creator.
vikorr
 
  1  
Reply Thu 21 Dec, 2017 08:00 pm
@maporsche,
The problem with the 'best' version of computer-controlled cars is that they would be networked... and networks are subject to hacking. Not too much of a problem for the average citizen, but an easy way to kill off political opponents.

Currently the technology exists to allow the remote switching-off of cars. It stops stolen vehicles dead. Which means that break-and-enter merchants would have to use their own cars... and suddenly they are much more identifiable, so they get arrested more. If a person accrued sufficient history, governments would likely legislate that they can disable such cars... and property crime would drop significantly.

It's not an expensive technology, so why hasn't it been introduced as mandatory, even if only on new cars to start with (before eventually covering all cars)?
Olivier5
 
  1  
Reply Fri 22 Dec, 2017 07:08 am
@maxdancona,
I understand you work in this field and so are bound to be a bit optimistic about it. I just don't share your optimism, that's all. Don't kid yourself into thinking you can demonstrate any of this. You can offer speculation, and that's what you did. The proof will be in the pudding.

So yes, I might change my opinion if credible data comes my way showing that machine driving reduces deaths. But not on the strength of such splendid 'arguments' as "driving is repetitive". Maybe where you live it is. Here in Rome, driving is a creativity test, a challenge, a race, an adventure. Every. Single. Day. And we like it this way. Machines won't cope with the sheer degree of improvisation that happens.

maxdancona
 
  4  
Reply Fri 22 Dec, 2017 07:38 am
@Olivier5,
Quote:
Don't kid yourself into thinking you can demonstrate any of this. You can offer speculation, and that's what you did. The proof will be in the puding.


You seem to be contradicting yourself. The "proof is in the pudding" means you can demonstrate it.

Quote:
Here in Rome, driving is a creativity test, a challenge, a race, an adventure. Every. Single. Day. And we like it this way. Machines won't cope with the sheer degree of improvisation that happens.


This is exactly why self-driving cars are a good idea. About 150 people died in car accidents (about a third of the total deaths) in Rome this year, in over 14,000 accidents. In addition, 39 pedestrians were killed by cars.

Human beings in Rome apparently aren't coping with the sheer degree of improvisation that happens.

engineer
 
  3  
Reply Fri 22 Dec, 2017 10:45 am
@Olivier5,
Olivier5 wrote:

Finally, computors can't make moral decisions, so what will they chose when faced with an alternative where either you risk one person's life, or you risk another's?

I think you give humans too much credit. I've been in a few accidents; there was no time to make moral decisions, just reflexes and twisted metal. If humans were so good at making moral decisions, there would be more dead animals on the road and fewer cars crashed trying to avoid them. I'd rather just have the AIs avoid accidents altogether.
maporsche
 
  3  
Reply Fri 22 Dec, 2017 10:49 am
@engineer,
Assuming that AI would be programmed to follow the laws of the road, even just getting all the cars to follow the legal speed limits would cut deaths/accidents by HUGE amounts.
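One reason speed limits matter so much: crash energy grows with the square of speed, so even a modest speed reduction cuts impact energy disproportionately. A rough back-of-the-envelope sketch, with an illustrative car mass and speeds chosen for the example:

```python
# Kinetic energy scales with v^2, so a moderate speed cut yields an
# outsized drop in crash energy. Mass and speeds are illustrative only.

def kinetic_energy_joules(mass_kg, speed_kmh):
    v = speed_kmh / 3.6                  # convert km/h to m/s
    return 0.5 * mass_kg * v * v

e_speeding = kinetic_energy_joules(1500, 130)   # 30 km/h over the limit
e_at_limit = kinetic_energy_joules(1500, 100)   # at the legal limit

# Dropping from 130 to 100 km/h removes about 41% of the crash energy.
print(f"Energy reduction: {1 - e_at_limit / e_speeding:.0%}")
```

The square law is why universal compliance with speed limits, which computer-controlled cars would enforce by construction, translates into such large safety gains.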
Olivier5
 
  1  
Reply Fri 22 Dec, 2017 12:01 pm
@maxdancona,
Quote:
About 150 people died in car accidents (about a third of the total deaths) in Rome this year

And do you know how many would have died with self-driving cars? No, you don't. In that sense, the proof will be in the pudding. No amount of theory will convince me at this point. I'll believe it when I see it.
ehBeth
 
  1  
Reply Fri 22 Dec, 2017 12:51 pm
@Olivier5,
Olivier5 wrote:
Machines won't cope whith the sheer degree of improvisation that happens.


when all vehicles on the road are self-driving, there will be no more improvisation

___


think of them as a variant on subways (some of which have been wonderfully driverless for years)

https://www.citylab.com/life/2015/04/the-case-for-driverless-trains-by-the-numbers/390408/

https://www.citylab.com/transportation/2014/03/rare-non-tragic-chance-revisit-idea-driverless-trains/8739/

http://www.vancitybuzz.com/2015/11/skytrain-technology-vancouver/


driverless transportation

it's a good thing
ehBeth
 
  1  
Reply Fri 22 Dec, 2017 12:53 pm
@maporsche,
maporsche wrote:
This is especially true the closer we get to all cars being computer controlled.


this is key

it can't be a bit of this and a bit of that
Olivier5
 
  1  
Reply Fri 22 Dec, 2017 01:00 pm
@ehBeth,
You will concede that a few subway lines are a bit easier to automate than a whole city's car traffic.
 
