I'm surprised no one has mentioned one big advantage of self-driving cars: you could safely text and drive.
The problem is that you cannot give it all the rules; the real world is messy.
You can at least give it all the official road rules, which is more than most drivers know.
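The "give it all the official road rules" point is the easy half of the problem: explicit rules encode naturally as data. A minimal sketch, assuming a hypothetical table of speed limits and a made-up rain rule (all names and numbers here are illustrative, not from any real rulebook):

```python
# Hypothetical rulebook: explicit road rules are trivial to encode as data.
SPEED_LIMITS_KMH = {
    "residential": 30,
    "urban": 50,
    "rural": 80,
    "motorway": 120,
}

def applicable_limit(road_type: str, raining: bool) -> int:
    """Look up the posted limit and apply an illustrative rain-reduction rule."""
    limit = SPEED_LIMITS_KMH[road_type]
    # Made-up rule for the sketch: motorway limit drops by 20 km/h in rain.
    if raining and road_type == "motorway":
        return limit - 20
    return limit

print(applicable_limit("motorway", raining=True))
```

This is the part computers are good at; the thread's argument is about everything this lookup does not cover.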
Why not just use Siri?
Yeah, but what is a road?
That's a good one. I've got some more.
What is a driver?
What is self?
I think you missed the point. You can give rules to a computer to separate cats from dogs in images, but what is a cat and what is a dog? That's the hard problem, not defining the rules.
Actually, the point is it doesn't matter. If it's a cat, or a dog, or a small child, or another car, it doesn't matter. The car needs to avoid it.
Yes, but you don't need to avoid the question.
What about a small frog hopping across the street? Avoid it or splatter it?
You don't need to do anything in life. But ignoring the philosophical stuff...
The car knowing the traffic rules is hardly any help when the hard problem is to make sense of visual data and navigate according to it.
Yes, but so what? It's hard, but not impossible. A self-driving car doesn't have to be better than a human driver in every single circumstance; it just has to be good enough in every circumstance in which it is legal to operate it in self-driving mode.
It's not philosophical, it's central to the problem. You said that at least these cars will be aware of the traffic rules. I agree, but pointed out that potential mistakes will then be of a different nature. Basically, while computers are very good at remembering and following rules, they are not very good at cognitive tasks or making sense of visual data, something that humans are much better at.
Which means that a self-driving car would be much safer in some (probably most) situations (like a highway at night) than a human driver, and less safe in others.
It is therefore up to the government to ban them all (the Luddite approach), accept the net benefit and make them legal for general use, or (most likely) try to minimise collisions by regulating when and where you can operate in self-driving mode, as they do today with car headlights, roadworks speed limits, or driving too fast in the rain.
Let's go back to the original quote: it was said that all rules can be given, just like in a computer game. In a computer game all possible outcomes are known, and the virtual world has constraints and is described in a non-messy way that is fully understood; it's virtual.
Why?
Nice strawman, but I think you agree that they need to be proven to be safe before they are used.
And I don't see why that quote is relevant.
Both computer drivers and human drivers operate in the same chaotic world, and have to make the same decisions. Both however do so in different ways, and have different strengths and weaknesses. But just because they are different, doesn't mean the new way is inferior.
Superior detection range, superior reaction time. Lack of exhaustion or distraction.
they are not very good at cognitive tasks or making sense of visual data, something that humans are much better at.
Which means that a self-driving car would be much safer in some (probably most) situations (like a highway at night) than a human driver, and less safe in others.
I never said they wouldn't. But some people around here seem oblivious to software verification and validation. Or think that because computer vision software has extreme difficulty distinguishing between a cat and a dog, a computer can't use a laser-scan radar to detect that there is an object on the road and avoid it appropriately.
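The detection-without-classification point can be sketched in a few lines. Assume, purely for illustration, that a laser scanner returns (angle, range) pairs; any return landing inside a corridor ahead of the car triggers avoidance, regardless of whether the object is a cat, a dog, or a child. The geometry and thresholds here are hypothetical:

```python
import math

def obstacle_ahead(scan, corridor_half_width=1.5, max_range=30.0):
    """Return True if any laser return falls inside the corridor directly ahead.

    scan: iterable of (angle_rad, distance_m) pairs, angle 0 = straight ahead.
    No attempt is made to classify the object; presence alone triggers avoidance.
    """
    for angle_rad, dist in scan:
        if dist > max_range:
            continue  # beyond the planning horizon, ignore for now
        x = dist * math.cos(angle_rad)  # forward distance from the car
        y = dist * math.sin(angle_rad)  # lateral offset from the centreline
        if x > 0 and abs(y) < corridor_half_width:
            return True
    return False

# Something 10 m straight ahead: avoid it, whatever it is.
print(obstacle_ahead([(0.0, 10.0)]))
```

The sketch deliberately answers "what is a cat?" with "it doesn't matter": the hard classification problem is sidestepped for the avoidance decision, which is the claim being made above.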
So, if you're driving and suddenly a child runs into the street chasing a ball, and on the other side there's a cyclist who is behaving properly, the car has to choose: hit the child and avoid the cyclist, or hit the cyclist and avoid the child? Considering the cyclist wears a helmet and has a much higher chance of survival, which one does the car hit? And what about the insurance? Who will be at fault? Just one simple situation... And what about using them as weapons?
Really, that's what started the discussion and set the topic. How is that not relevant?
I haven't said that the new way is inferior; I don't know, we'll see.
If this really ends up being true I'll eat my own head!
So... one of your four cars has carburetors, right?
I ignored it originally. And I already said why I think the quote is not relevant.
But seriously, the idea of using a fictional artificial intelligence (H.A.L.) to justify a distrust of computers is insane.
This was the premise and your conclusion:
they are not very good at cognitive tasks or making sense of visual data, something that humans are much better at.
Which means that a self-driving car would be much safer in some (probably most) situations (like a highway at night) than a human driver, and less safe in others.
...but pointed out that potential mistakes then will be of a different nature.
None of the points you bring up above helps if the world is interpreted incorrectly. I don't disagree with the points, but they are not related to what I said; it's like that quote about computers allowing us to make mistakes faster.
People can interpret the world incorrectly too. This usually results in car crashes and injury as well. The difference is that a computer has the capability of re-interpreting the world, and re-evaluating the best course of action much more frequently than a human.
Really? I'm discussing a traditional programming model versus the different approaches used in, say, machine learning, how they differ, and why it's not a simple problem. You seem to be discussing something else entirely, and seem bent on convincing everyone that self-driving cars are awesome.
Yes, but the human brain is vastly better at this, which is why it's an active area of research that has been going on for a long time.