I'm surprised no one has mentioned one big advantage for self driving cars, you could safely text and drive.
 
Yeah, but what is a road?

That's a good one. I've got some more.

What is a driver?
What is self?

If your friend drives you home from the pub, is that not a self-driving car? How do we know that our "friend" is not a robot? Or a figment of our own imagination???

Am I really here??? Who's typing this??? What am I???
 
That's a good one. I've got some more.

What is a driver?
What is self?

I think you missed the point. You can give rules to a computer to separate cats from dogs in images, but what is a cat and what is a dog? That's the hard problem, not defining the rules.
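
To put that in code terms, here's a toy sketch (nothing real, the function names are made up): the rule that uses the label is one line, while the classify() step, which real systems learn from data rather than hand-write, is the hard part.

# Toy sketch: the decision rule is trivial; the perception step is the hard problem.
def classify(image_pixels):
    # Placeholder for the hard part: turning raw pixels into "cat" or "dog".
    # In practice this is learned from labelled examples, not hand-coded rules.
    raise NotImplementedError("perception is the hard problem")

def sort_photo(image_pixels):
    # The easy part: once a label exists, the rule is a single line.
    return "cat_folder" if classify(image_pixels) == "cat" else "dog_folder"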
 
I think you missed the point. You can give rules to a computer to separate cats from dogs in images, but what is a cat and what is a dog? That's the hard problem, not defining the rules.

Actually, the point is it doesn't matter. If it's a cat, or a dog, or a small child, or another car, it doesn't matter. The car needs to avoid it.
 
Actually, the point is it doesn't matter. If it's a cat, or a dog, or a small child, or another car, it doesn't matter. The car needs to avoid it.

What about a small frog hopping across the street? Avoid it or splatter it?
 
Yes, but you don't need to avoid the question.

You don't need to do anything in life. But ignoring the philosophical stuff...

You can ignore certain things if they aren't relevant. If it's a black box (which a self-driving car's software would be), the end always justifies the means. (The end being "it works", and the means being what sensors it uses and what processing it does.)
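
In testing terms that's just black-box verification: feed the system a scenario, check the outcome, ignore the internals. A toy sketch, with a hypothetical drive() stand-in and made-up scenarios:

# Toy black-box test: only the outcome is checked, never how it was produced.
def drive(scenario):
    # Hypothetical stand-in for the car's software: brake whenever anything is ahead.
    return "brake" if scenario["object_ahead"] else "continue"

test_scenarios = [
    {"object_ahead": True, "expected": "brake"},
    {"object_ahead": False, "expected": "continue"},
]

for s in test_scenarios:
    assert drive(s) == s["expected"]
print("all black-box checks passed")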

----------

What about a small frog hopping across the street? Avoid it or splatter it?

Do you want Greenpeace picketing YOUR house???

But seriously, if it's big enough to be detected by radar, it's probably worth stopping for. If it's not, it probably isn't.
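
If you wanted to write that heuristic down, it's a one-liner; the threshold below is a made-up number, not any real radar spec:

# Toy decision rule: brake for anything large enough to show up on the radar.
MIN_DETECTABLE_SIZE_M = 0.2  # hypothetical threshold, purely for illustration

def should_brake(object_size_m):
    return object_size_m >= MIN_DETECTABLE_SIZE_M

print(should_brake(0.05))  # frog-sized: False, splatter
print(should_brake(0.60))  # dog-sized: True, stop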
 
You don't need to do anything in life. But ignoring the philosophical stuff...

It's not philosophical; it's central to the problem. You said that at least these cars will be aware of the traffic rules. I agree, but I pointed out that the potential mistakes will then be of a different nature. Basically, while computers are very good at remembering and following rules, they are not very good at cognitive tasks or making sense of visual data, something that humans are much better at.
 
The car knowing the traffic rules is hardly any help when the hard problem is to make sense of visual data and navigate according to it.

Yes, but so what? It's hard, but not impossible. A self-driving car doesn't have to be better than a human driver in every single circumstance; it just has to be good enough in every circumstance in which it is legal to operate in self-driving mode.
 
Yes, but so what? It's hard, but not impossible. A self-driving car doesn't have to be better than a human driver in every single circumstance; it just has to be good enough in every circumstance in which it is legal to operate in self-driving mode.

Let's go back to the original quote: it was said that all rules can be given, just like in a computer game. In a computer game, all possible outcomes are known and the virtual world has constraints and is described in a non-messy way that is fully understood; it's virtual.
 
It's not philosophical; it's central to the problem. You said that at least these cars will be aware of the traffic rules. I agree, but I pointed out that the potential mistakes will then be of a different nature. Basically, while computers are very good at remembering and following rules, they are not very good at cognitive tasks or making sense of visual data, something that humans are much better at.

Which means that a self-driving car would be much safer in some (probably most) situations (like a highway at night) than a human driver, and less safe in others. It is therefore up to the government to ban them all (the Luddite approach), accept the net benefit and make them legal for general use, or (most likely) try to minimise collisions through enforcement of when and where you can operate in self-driving mode. Like they do today with car headlights, roadworks speed limits, or driving too fast in the rain.
 
Which means that a self-driving car would be much safer in some (probably most) situations (like a highway at night) than a human driver, and less safe in others.

Why?

It is therefore up to the government to ban them all (the Luddite approach), accept the net benefit and make them legal for general use, or (most likely) try to minimise collisions through enforcement of when and where you can operate in self-driving mode. Like they do today with car headlights, roadworks speed limits, or driving too fast in the rain.

Nice strawman, but I think you agree that they need to be proven to be safe before they are used.
 
Let's go back to the original quote: it was said that all rules can be given, just like in a computer game. In a computer game, all possible outcomes are known and the virtual world has constraints and is described in a non-messy way that is fully understood; it's virtual.

And I don't see why that quote is relevant. Both computer drivers and human drivers operate in the same chaotic world, and have to make the same decisions. However, both do so in different ways, and have different strengths and weaknesses. But just because they are different doesn't mean the new way is inferior. If the new way has a higher accident rate in some specific scenarios, and a far lower accident rate everywhere else, it surely has to be considered superior.

I would also expect that any areas where a self-driving car performs worse would most likely involve low-speed collisions, and that the self-driving car would be very unlikely to be at fault.

----------


Superior detection range, superior reaction time. Lack of exhaustion or distraction.

Nice strawman, but I think you agree that they need to be proven to be safe before they are used.

I never said they wouldn't. But some people around here seem oblivious to software verification and validation. Or they think that because computer vision software has extreme difficulty distinguishing between a cat and a dog, a computer can't use a laser-scanning radar to detect that there is an object on the road and avoid it appropriately.
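
To make that second point concrete, here's a toy sketch: detection doesn't require classification. All numbers and names are invented for illustration; a real system would be far more involved.

import math

LANE_HALF_WIDTH_M = 1.5   # assumed half-width of the car's path
SAFE_RANGE_M = 30.0       # assumed distance at which we want to react

def obstacle_in_path(scan):
    # scan: list of (angle_rad, range_m) returns from a laser scanner.
    # We never ask what the object is, only whether something is in the lane.
    for angle, rng in scan:
        lateral = rng * math.sin(angle)   # offset from the centreline
        ahead = rng * math.cos(angle)     # distance straight ahead
        if abs(lateral) < LANE_HALF_WIDTH_M and 0 < ahead < SAFE_RANGE_M:
            return True                   # something is in the path: slow down or stop
    return False

example_scan = [(0.02, 18.0), (0.40, 25.0), (-0.30, 40.0)]
print(obstacle_in_path(example_scan))     # True: the first return is in-lane and close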
 
And I don't see why that quote is relevant.

Really, that's what started the discussion and set the topic. How is that not relevant?

Both computer drivers and human drivers operate in the same chaotic world, and have to make the same decisions. However, both do so in different ways, and have different strengths and weaknesses. But just because they are different doesn't mean the new way is inferior.

I haven't said that the new way is inferior. I don't know; we'll see.

----------

Superior detection range, superior reaction time. Lack of exhaustion or distraction.

This was the premise and your conclusion:

they are not very good at cognitive tasks or making sense of visual data, something that humans are much better at.

Which means that a self-driving car would be much safer in some (probably most) situations (like a highway at night) than a human driver, and less safe in others.

None of the points you bring up above helps if the world is interpreted incorrectly. I don't disagree with the points, but they are not related to what I said; it's like that quote about computers allowing us to make mistakes faster.

I never said they wouldn't. But some people around here seem oblivious to software verification and validation. Or they think that because computer vision software has extreme difficulty distinguishing between a cat and a dog, a computer can't use a laser-scanning radar to detect that there is an object on the road and avoid it appropriately.

I'm not oblivious to software verification and validation...
 
So, suppose you're driving and a child suddenly runs into the street after a ball, and on the other side there's a cyclist who is riding properly. The car has to choose between hitting the child and avoiding the cyclist, or hitting the cyclist and avoiding the child, given that the cyclist wears a helmet and has a much higher chance of survival. Who does the car hit? And what about the insurance? Who will be at fault? Just one simple situation... And what about using them as weapons?

I guarantee a computer can process all the available alternatives faster than a person and find the solution least likely to hit anything. And even more likely, the radar would notice the child running toward the street long before a person would, reducing speed and avoiding the situation altogether.
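
As a toy sketch of "process all the alternatives and pick the least risky one" (the manoeuvres and risk numbers are invented; a real planner would score candidate trajectories against the predicted motion of everything the sensors report):

# Toy planner: score each candidate manoeuvre by collision risk and pick the minimum.
candidate_risks = {
    "brake_hard":   0.05,
    "swerve_left":  0.40,   # toward the cyclist
    "swerve_right": 0.60,   # toward the child
    "do_nothing":   0.95,
}

def choose_manoeuvre(risks):
    return min(risks, key=risks.get)

print(choose_manoeuvre(candidate_risks))  # brake_hard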
 
Really, that's what started the discussion and set the topic. How is that not relevant?

I ignored it originally. And I already said why I think the quote is not relevant.

I haven't said that the new way is inferior. I don't know; we'll see.

Certainly we will. But I feel that there are certain people in this thread who don't know what they are talking about, using pointless thought experiments (which many humans would fail anyway) to justify their view that self-driving cars are child-murderers. (slight exaggeration)

But seriously, the idea of using a fictional artificial intelligence (H.A.L.) to justify a distrust of computers is insane.
 
I ignored it originally. And I already said why I think the quote is not relevant.

Really? I'm discussing a traditional programming model versus the different approaches used in, say, machine learning: how they differ and why it's not a simple problem. You seem to be discussing something else entirely and seem bent on convincing everyone that self-driving cars are awesome.
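
For anyone following along, the contrast I mean looks roughly like this (everything below is a toy illustration): in the traditional model the programmer writes the rule; in the machine-learning model the behaviour is inferred from labelled examples and no explicit rule is ever written.

# Traditional model: the rule is written down explicitly.
def speed_ok(speed_kmh, limit_kmh):
    return speed_kmh <= limit_kmh

# Machine-learning model (bare-bones nearest-neighbour): the "rule" comes from data.
training_examples = [
    (0.3, "animal"), (0.5, "animal"),        # (object height in metres, label)
    (1.6, "pedestrian"), (1.8, "pedestrian"),
    (3.8, "truck"),
]

def predict(height_m):
    nearest = min(training_examples, key=lambda ex: abs(ex[0] - height_m))
    return nearest[1]

print(speed_ok(47, 50))   # True: explicit rule
print(predict(1.7))       # "pedestrian": no rule for this was ever written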

But seriously, the idea of using a fictional artificial intelligence (H.A.L.) to justify a distrust of computers is insane.

I haven't; someone made a reference to it as a joke on the last page.
 
This was the premise and your conclusion:

they are not very good at cognitive tasks or making sense of visual data, something that humans are much better at.

Which means that a self-driving car would be much safer in some (probably most) situations (like a highway at night) than a human driver, and less safe in others.

Actually, the premise I was referring to was:
...but I pointed out that the potential mistakes will then be of a different nature.

"Which means" was the wrong thing for me to say. "Which suggests" would have been better.

None of the points you bring up above helps if the world is interpreted incorrectly. I don't disagree with the points, but they are not related to what I said; it's like that quote about computers allowing us to make mistakes faster.

People can interpret the world incorrectly too. This usually results in car crashes and injury as well. The difference is that a computer has the capability of re-interpreting the world, and re-evaluating the best course of action much more frequently than a human.
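
A toy sketch of that re-evaluation point (the loop rate and thresholds are assumptions, not real figures): the loop below re-reads the sensors and re-plans dozens of times per second, so a single bad interpretation gets many chances to be corrected.

import random

LOOP_HZ = 50            # assumed planning rate: 50 re-evaluations per second
BRAKE_RANGE_M = 25.0    # assumed distance at which braking starts

def read_sensors():
    # Stand-in for a real sensor stack: distance to the nearest object ahead.
    return random.uniform(5.0, 100.0)

def plan(nearest_object_m):
    return "brake" if nearest_object_m < BRAKE_RANGE_M else "cruise"

def run(seconds):
    action = "cruise"
    for _ in range(int(seconds * LOOP_HZ)):
        action = plan(read_sensors())   # a fresh interpretation every cycle
    return action

print(run(1))   # one simulated second = 50 chances to correct a misreading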
 
People can interpret the world incorrectly too. This usually results in car crashes and injury as well. The difference is that a computer has the capability of re-interpreting the world, and re-evaluating the best course of action much more frequently than a human.

Yes, but the human brain is vastly better at this, which is why it's an active area of research that has been going on for a long time.
 
Really? I'm discussing a traditional programming model versus the different approaches used in, say, machine learning: how they differ and why it's not a simple problem. You seem to be discussing something else entirely and seem bent on convincing everyone that self-driving cars are awesome.

Yes, we are discussing the benefits and limitations of computer vision, object detection and path planning. We are not discussing whether, because life is not a computer game and some outcomes of actions are unknown, a computer will somehow never be as good as a human at driving. Since the human is also operating in the same unknown environment, that statement is flawed and not really worth discussing.

However, whether we are discussing a statement not worth discussing is apparently worthy of discussion.

----------

Yes, but the human brain is vastly better at this, which is why it's an active area of research that has been going on for a long time.

Indeed. I happen to know this from experience. But my argument, if I were to put it in a sentence, is that this drawback is adequately countered by the significant benefits of reaction time, radar sensors, freedom from exhaustion, distraction and frustration, and the dynamic modelling that a computer is capable of.
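
Back-of-the-envelope numbers for the reaction-time part of that argument (the human and computer reaction times below are rough assumptions, not measurements):

speed_kmh = 100
speed_ms = speed_kmh / 3.6                 # ~27.8 m/s

human_reaction_s = 1.5                     # commonly quoted ballpark for a driver
computer_reaction_s = 0.1                  # assumed sensing-to-braking latency

human_travel = speed_ms * human_reaction_s           # ~41.7 m before braking even starts
computer_travel = speed_ms * computer_reaction_s     # ~2.8 m

print(f"human covers {human_travel:.1f} m before braking")
print(f"computer covers {computer_travel:.1f} m before braking")
print(f"difference: {human_travel - computer_travel:.1f} m")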
 