I would say no to driverless cars, even if it were only one life lost per decade globally, if it took us thousands of driverless-car deaths to perfect the technology.

Very interesting point of view - in your eyes, one death due to an autonomous vehicle is worse than a million deaths due to human drivers. Because reasons.

You're saying that unless we can reduce deaths to absolutely zero, there's no point in reducing them at all. Do you also live the rest of your life by these beliefs? Why bother making money if you're not going to be the richest person in the world? Why bother exercising if you're not going to be the fittest person in the world? Why have hospitals when they can't save 100% of patients?

One of my biggest fears is that legislators will hold viewpoints like yours. But so far it's looking pretty good. Most legislators seem to agree that reducing deaths is a sensible course of action, and autonomous vehicles are a guaranteed way to reduce them.

It only seems to be armchair warriors, terrified of technology, who would rather drive a vehicle themselves even if it were statistically proven to be a billion times more dangerous.

It's worth pointing out that when a computer makes a mistake, the mistake gets fixed, and no computer ever makes that mistake again. The same can't be said for human drivers who aren't able to learn from other people's mistakes.

But here I am with my silly opinion that fewer people dying is a good thing.
 
Without judging this accident...

Machine watching human and waiting to step in when human errs -- good idea. (Machines never get bored.)
Human watching machine and waiting to step in when machine errs -- bad idea. (Humans have a poor attention span.)
 
This brings up a topic that I find really interesting, and it's about how the "AI" for self driving cars works. In a scenario where the car cannot avoid hitting a pedestrian, or crashing into a wall (or maybe even another vehicle), which does the car choose? Does it save the driver? Or does it save the pedestrian? What if the car has multiple passengers and it's a single person on the sidewalk? Or if the sidewalk has quite a few people on it and the car only has the driver? Somewhere, someone has to account for these scenarios.
Also, what if the passenger is a VIP, a governor, a president, etc., whose life society deems more valuable than the average Joe's? (So much for "we are all the same" :/)
 
This brings up a topic that I find really interesting, and it's about how the "AI" for self driving cars works. In a scenario where the car cannot avoid hitting a pedestrian, or crashing into a wall (or maybe even another vehicle), which does the car choose? Does it save the driver? Or does it save the pedestrian? What if the car has multiple passengers and it's a single person on the sidewalk? Or if the sidewalk has quite a few people on it and the car only has the driver? Somewhere, someone has to account for these scenarios.

The most a human can do in an emergency with 0.5 seconds to react is to slam on the brakes and hold on for dear life. I daresay no human has ever actually had the time to calculate the value of a life in the microseconds before a fatal accident. So regardless of what the AI is programmed to do, any decision it makes is arguably going to be better than what we have now, which is basically "hope for the best".
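Purely as an illustration of the kind of trade-off being debated here (every manoeuvre, probability, and weight below is invented; no vendor has published such a policy), the "choice" boils down to scoring each physically feasible option and picking the least bad one:

```python
# Hypothetical sketch of the "which option does the car pick" dilemma.
# All manoeuvres, probabilities, and weights are made up for illustration.
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    occupants_at_risk: int    # people inside the car
    pedestrians_at_risk: int  # people outside the car
    crash_probability: float  # chance this manoeuvre still ends in a collision

def expected_harm(m: Manoeuvre) -> float:
    # A deliberately naive policy: every life counts equally.
    # The whole ethical debate is over whether these weights should differ.
    return m.crash_probability * (m.occupants_at_risk + m.pedestrians_at_risk)

options = [
    Manoeuvre("brake straight", 1, 1, 0.9),
    Manoeuvre("swerve into wall", 1, 0, 0.6),
    Manoeuvre("swerve onto sidewalk", 0, 3, 0.5),
]

print(min(options, key=expected_harm).name)  # -> swerve into wall
```

Whatever numbers you plug in, the point stands: a machine can at least evaluate the options in the time available, whereas a human gets no vote at all.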
 
This goes to show that even the most advanced tech cannot account for human stupidity. The case will rest on what the outcome would have been if the pedestrian, under the same conditions, had collided with a human-driven car.
 
Let's take the worst-case scenario: she stepped into the road out of nowhere. The car should already be prepared to act accordingly. Perhaps the car didn't have enough space to stop, but maybe it would have been better to swerve to the side, perhaps crashing into another car and sending both to the hospital but neither to the cemetery. The car has the power to process all likely scenarios in milliseconds (if not microseconds).

No idea how fast the car was driving, but assume 20-25 mph (??). The AI may be able to react in milliseconds, but a huge chunk of metal is not going to stop in millimetres or centimetres. What's the stopping distance of a car going 20 mph? Twelve metres, according to Google, though I expect it's less than that in the real world. But even a 2-3 metre stopping distance still means hitting someone who suddenly appears in front of the car.
If an autonomous car could, when necessary, stop on a dime, then pedestrian accidents might occur less frequently... but that kind of stopping wouldn't be fun for any passenger in the car.

[Edit: Report states 40mph...]
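For anyone who wants to sanity-check those figures, here's a rough back-of-the-envelope sketch (the friction coefficient and reaction times are my assumptions, not from the report). The key point is that braking distance grows with the square of speed, so even a near-instantly reacting computer still needs roughly 25 metres to stop from 40 mph:

```python
# Rough stopping-distance estimate: reaction distance plus ideal braking
# distance under constant deceleration. Friction and reaction times are
# assumed values, not taken from the accident report.
G = 9.81           # gravity, m/s^2
MU = 0.7           # assumed tyre-road friction (dry asphalt)
MPH_TO_MS = 0.44704

def stopping_distance_m(speed_mph: float, reaction_s: float) -> float:
    v = speed_mph * MPH_TO_MS
    reaction = v * reaction_s        # distance covered before braking begins
    braking = v**2 / (2 * MU * G)    # v^2 / (2*mu*g)
    return reaction + braking

for mph in (20, 40):
    human = stopping_distance_m(mph, reaction_s=1.5)     # typical human
    computer = stopping_distance_m(mph, reaction_s=0.1)  # near-instant AI
    print(f"{mph} mph: human ~{human:.0f} m, computer ~{computer:.0f} m")
# 20 mph: human ~19 m, computer ~7 m
# 40 mph: human ~50 m, computer ~25 m
```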
 
It boggles my mind that they are allowed to conduct testing of these unmanned "alpha/beta" cars in public in the US.
Nowhere else in the world would a country risk beta-testing two tonnes of metal against its citizens in the hope that nothing goes wrong.
 
I would say I am terrified to live in a world where we romantically celebrate technology to the point that we no longer care about its dark underbelly.

But I guess technology has desensitized us to humanity so we no longer care if people die? As long as we have new gadgets?

I am not against driverless cars or technology. I'd prefer a slower and safer road to developing and testing this stuff...

These cars should only be in real-world cities once they have undergone serious testing and regulatory frameworks have been established, including independent auditing to ensure the car companies aren't hiding bugs and issues.

This is incredibly naive. There is no perfectly safe world. Roughly 3,287 people die on the road every day; how many of them are in autonomous cars? And let's not ignore the fact that the Uber car was not found at fault, and the police have stated that the accident would have been impossible to prevent in any case. You can't change the laws of physics when someone steps onto the road and a car is only a few feet away.

This accident had absolutely nothing to do with the fact that the car was autonomous (and it had a human driver behind the wheel, by the way).
 
Very interesting point of view - in your eyes, one death due to an autonomous vehicle is worse than a million deaths due to human drivers. Because reasons.

You're saying that unless we can reduce deaths to absolutely zero, there's no point in reducing them at all. Do you also live the rest of your life by these beliefs? Why bother making money if you're not going to be the richest person in the world? Why bother exercising if you're not going to be the fittest person in the world? Why have hospitals when they can't save 100% of patients?

No, you are taking what I said to the extreme. I followed my statement with an analogy to medicine for a reason. A requirement of perfection would be impossible.

We don’t test medicine on people without informed consent and without having conducted very good studies before exposing people to the experimental drugs. We are doing the exact opposite with this technology. The comment I responded to is also in support of the current model.

In my view, we need to come up with regulations defining what these cars can test on public roads and what tests they must pass before doing so. We should also have a way of informing people about these vehicles.

My point is simply that if we have to kill countless people to “train” the computers, and these people are innocent members of the public who in no way opted into this training, then that is an unacceptable way to develop this technology.

We can’t look at a person such as the one who died here and say, “Well, she died, but at least she trained the algorithm.” Human life matters, the same whether it's someone benefiting from the safety improvements 20 years from now or someone today while we develop them.

It also isn’t enough to say “humans kill more people, so we don’t need to care,” especially knowing that corporations will place profits over people’s lives, as GM did with its ignition-switch problems.
 
Local TV also reported the pedestrian was wearing earbuds.

No way it was a software error; the pedestrian had a brain fart, didn’t follow the applicable traffic laws, and stepped out into the road. Splat.

The pedestrian is added to the body of knowledge of dumbass things pedestrians do, and programmers will make changes to try to mitigate the effects of such actions in the future. You’ve got to start someplace, and that’s by assuming vehicles and pedestrians will follow the laws. Then you add in anticipation subroutines as you gain experience with the things ‘real’ drivers and pedestrians routinely do. Then, lastly, you start working on the dumbass things as you build a body of knowledge.
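For what it's worth, a toy rendering of that layered approach (every function name and example behaviour below is invented for illustration) might look like this, with later layers added on as fleet experience accumulates rather than replacing the earlier ones:

```python
# Toy sketch of the layered behaviour model described above.
# All names and example behaviours are hypothetical.

def rule_following(actor: str) -> list[str]:
    """Layer 1: assume the actor obeys traffic law."""
    return [f"{actor} crosses at the crosswalk on a walk signal"]

def anticipation(actor: str) -> list[str]:
    """Layer 2: common real-world habits learned from experience."""
    return [f"{actor} jaywalks mid-block where others routinely do"]

def edge_cases(actor: str) -> list[str]:
    """Layer 3: rare behaviours added after incidents like this one."""
    return [f"{actor} steps off a median into traffic at night"]

def predicted_behaviours(actor: str) -> list[str]:
    # The planner must leave room for all layers, not just the polite one.
    return rule_following(actor) + anticipation(actor) + edge_cases(actor)

print(predicted_behaviours("pedestrian"))
```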

The village idiot can figure out where this incident falls.
 
Perhaps these driverless cars should be equipped with "extra cushioned bumpers" all the way around the car, so as to soften an impact against other vehicles or pedestrians if they encounter collisions.
 
As such, another ethical component must be considered -- especially if vehicles are to be fully 'autonomous', as we still haven't figured out who would be responsible.

People keep saying this, but we've known for years already what's going to happen WRT responsibility for autonomous car crashes: multiple automakers have already said they would accept the liability. The auto insurance industry is collectively freaking out over the loss of a huge revenue stream.

Despite this crash, the autonomous car industry is far ahead of where the average person thinks it is. It's coming--soon.

https://www.autoblog.com/2015/10/07/volvo-accept-liability-self-driving-car-crashes/
 
It boggles my mind that they are allowed to conduct testing of these unmanned "alpha/beta" cars in public in the US.
Nowhere else in the world would a country risk beta-testing two tonnes of metal against its citizens in the hope that nothing goes wrong.
They're not "unmanned"; a little bit of reading goes a long way.
 
Meanwhile (in the US alone), 15 pedestrians will be killed today by negligent human drivers. Fifteen more will die tomorrow, 15 died yesterday, and 15 die every day. Why does no one care about that?

Exactly this. This is obviously a horrible thing to have happened. But many people will instinctively just assume that it was the car's fault, and that self-driving cars are terrible and should be banned immediately.

But of course you can't possibly compare the stats between human-driven cars and self-driven cars based on one accident. Especially as early reports suggest it may have been the unfortunate victim who was at fault, and the accident would likely have happened even if it had been a regular car.
It boggles my mind that they are allowed to conduct testing of these unmanned "alpha/beta" cars in public in the US.
Nowhere else in the world would a country risk beta-testing two tonnes of metal against its citizens in the hope that nothing goes wrong.

It boggles my mind that so many people are so quick to post on internet forums without first reading the story they want to comment on.
No, you are taking what I said to the extreme. I followed my statement with an analogy to medicine for a reason. A requirement of perfection would be impossible.

We don’t test medicine on people without informed consent and without having conducted very good studies before exposing people to the experimental drugs. We are doing the exact opposite with this technology. The comment I responded to is also in support of the current model.

In my view, we need to come up with regulations defining what these cars can test on public roads and what tests they must pass before doing so. We should also have a way of informing people about these vehicles.

My point is simply that if we have to kill countless people to “train” the computers, and these people are innocent members of the public who in no way opted into this training, then that is an unacceptable way to develop this technology.

We can’t look at a person such as the one who died here and say, “Well, she died, but at least she trained the algorithm.” Human life matters, the same whether it's someone benefiting from the safety improvements 20 years from now or someone today while we develop them.

It also isn’t enough to say “humans kill more people, so we don’t need to care,” especially knowing that corporations will place profits over people’s lives, as GM did with its ignition-switch problems.

Nobody is saying we don't need to care, just that it's far from demonstrated that the accident happened because it was a self-driving car, as opposed to a regular car. There comes a point where you have to accept that the risk is low enough to be viable, and presumably these things have been extensively and rigorously tested on private roads first.
 
There are roughly 300 million cars registered in the US, and maybe 100 of them are driverless vehicles, so your conclusion that the rates of pedestrian deaths are the same for both is illogical and absurd. That's why no one cares.
And yet we all understand their comment, Literal Larry.
 
Without judging this accident...

Machine watching human and waiting to step in when human errs -- good idea. (Machines never get bored.)
Human watching machine and waiting to step in when machine errs -- bad idea. (Humans have a poor attention span.)

Who would want to watch grass grow?

This will probably work better in a less populated area, as opposed to a larger city such as NYC.

At some point you've got to test it in populated areas, don't you? Otherwise, how else would everything be tested?

"Failure: car *and* driver didn't take action."

You can help technology, but you can't help humans if they react too late.
 
If autonomous cars rely on visible light to see obstructions, then there are going to be a lot more accidents and deaths going forward. I was T-boned in an intersection in broad daylight because sunlight blinded the other driver, and camera lenses are far easier to blind.
 
Very sad. I have to wonder why the human behind the wheel failed to take over in this situation, though. That's the point of testing with a human behind the wheel, isn't it?
Humans are behind the wheel to take over when the vehicle fails or could fail due to various road and traffic conditions. If someone "jumps" out in front of a moving vehicle, there is no solution for that. The best case is that a fully autonomous vehicle at least attempts to brake. It's still unclear why this car did not attempt to brake, but it could be due to how suddenly the pedestrian moved or a genuine bug in the car's software. Either way, it only bolsters the need for autonomous vehicles.
 
If autonomous cars rely on visible light to see obstructions, then there are going to be a lot more accidents and deaths going forward. I was T-boned in an intersection in broad daylight because sunlight blinded the other driver, and camera lenses are far easier to blind.

OK, then don't drive at night? Humans have difficulty seeing at night as well... so both would have the same "can't see" situations.

At the moment, it looks like humans still win, because a human will react and slow down if they can't see... What would an autonomous car do? Maintain speed regardless? So the human driver still wins there.

I would still say it should be either one way or the other... if you get caught in the crossfire of "the human must take control at certain times when the car can't cope", how will we know when that "certain time" has arrived until it's too late?
 
If autonomous cars rely on visible light to see obstructions, then there are going to be a lot more accidents and deaths going forward. I was T-boned in an intersection in broad daylight because sunlight blinded the other driver, and camera lenses are far easier to blind.
I believe only Tesla relies completely on cameras and optical conditions, while all other autonomous vehicles primarily use LIDAR. But even Tesla's many cameras beat the two eyes in the front of our heads.
 