I rather suspect that the per passenger-mile death toll of horse (think Christopher Reeve) and carriage transportation was probably higher, but I doubt that proper statistics are available.

We've accepted the toll from manually driven automobiles because we didn't have any better alternative. We're on the cusp of that now.

We look back on how dangerous some aspects of life were as recently as 50 years ago and marvel at how we ever accepted it. In another generation, this will be another such example.



I've heard the Streets of San Francisco described a lot of ways, but simple and straightforward has never been one of them. :) Oh, it's not London, certainly, but the only thing missing from your description is snow (though lately we've had a lot of rain).


Airplanes have been capable of autonomously performing the entire journey from chock to chock without any human assistance for over 50 years now. They don't do that simply because the pilots are not and have never been a major cost in air travel (paying the flight crew is at most $10 of your ticket price).
Sure, aircraft can be put in auto mode when in flight, but I have yet to see computers take an aircraft up or down with a pilot passively observing on a commercial airline. My guess is the pilots are needed if there is an issue/malfunction aboard, with human judgment coming to the rescue when machines would just cause the aircraft to crash.

Which is why I say when a pilot is no longer needed, I will be very comfortable with autonomous vehicles, on the ground or in the air.
 
Sure, aircraft can be put in auto mode when in flight, but I have yet to see computers take an aircraft up or down with a pilot passively observing on a commercial airline.

https://en.wikipedia.org/wiki/Autoland

The first experiments date back further than most people alive today.

I don't have a handy reference to auto-takeoff, but it's trivial by comparison.

My guess is the pilots are needed if there is an issue/malfunction aboard, with human judgment coming to the rescue when machines would just cause the aircraft to crash.

Undoubtedly that's the thinking; however, the ratio of the number of times humans have prevented aviation disasters or lessened their severity to the number of times humans have caused or exacerbated them is below unity.

Which is why I say when a pilot is no longer needed, I will be very comfortable with autonomous vehicles, on the ground or in the air.

It won't be long.

https://mashable.com/2017/08/07/pilotless-planes-report/
 
https://en.wikipedia.org/wiki/Autoland

The first experiments date back further than most people alive today.

I don't have a handy reference to auto-takeoff, but it's trivial by comparison.



Undoubtedly that's the thinking; however, the ratio of the number of times humans have prevented aviation disasters or lessened their severity to the number of times humans have caused or exacerbated them is below unity.



It won't be long.

https://mashable.com/2017/08/07/pilotless-planes-report/
Okay. I’m waiting to see it happen, not doubting some technology exists.

All it takes is for a human to avert a disaster once and save 400 lives. And if an aircraft gets into a situation, e.g. the hydraulics go out, I want a human flying, not computers.
 
All it takes is for a human to avert a disaster once and save 400 lives. And if an aircraft gets into a situation, e.g. the hydraulics go out, I want a human flying, not computers.

How would a plane running purely on autopilot handle a double bird strike knocking out both engines and effectively rendering it a glider, like what happened over the Hudson a few years back?

Computers can do 99% of the work, but if the 1% they can't do leads to the death of a plane full of people, no one would want to fly on such a plane, as everyone has only one life to lose.
 
I rather suspect that the per passenger-mile death toll of horse (think Christopher Reeve) and carriage transportation was probably higher, but I doubt that proper statistics are available.

We've accepted the toll from manually driven automobiles because we didn't have any better alternative. We're on the cusp of that now.

We look back on how dangerous some aspects of life were as recently as 50 years ago and marvel at how we ever accepted it. In another generation, this will be another such example.

But the people who do crash aren't entirely random. Even just changing who's dying from vehicle accidents is an ethical issue.

Airplanes have been capable of autonomously performing the entire journey from chock to chock without any human assistance for over 50 years now. They don't do that simply because the pilots are not and have never been a major cost in air travel (paying the flight crew is at most $10 of your ticket price).

In fact, there are at least three examples where not having a flight crew would have prevented a rogue pilot from intentionally crashing the plane and killing everyone aboard. And that's even before 2001. The single most deadly airline accident of all time was firmly and absolutely a scenario an autonomous system would never have fallen into.

Automating planes in the air is orders of magnitude more straightforward than ground transportation. They have access to all the data they need to "see".

There's a reason planes have been able to take off, fly, and land automatically for more than half a century -- but still won't taxi.
 
The safety of autonomous vehicles is not in dispute by anyone who doesn't wear a tinfoil hat.

At this point, it's not so much "testing" as "proving."
I would have expected someone who lists Silicon Valley as their location to be a little more data driven... That’s the problem with the modern Valley, it’s confused magical thinking for dedication.

https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html

There is about one fatality per 100 million miles for human-driven cars. We have a fatality for autonomous vehicles after roughly 10 million miles, mostly in the most benign environments in the country. Not a statistically significant sample size, but not evidence of indisputable safety.

https://www.nytimes.com/2018/03/23/technology/uber-self-driving-cars-arizona.html

If even the best autonomous systems require human intervention every 5600 miles, it’s not safer than a human driver. Uber required intervention every 13 miles. Drunk teenagers require less intervention.


This is the problem with consumer companies getting into safety-critical systems: rushed testing, hidden data, anticipating the conclusion while underestimating the hard work in between, and confusing caution for lack of vision.

Autonomous vehicles aren’t safer. They may be eventually, but they’re not even close right now.
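For what it's worth, the cited rates can be put side by side with a quick back-of-envelope calculation. The figures below are just the rough numbers from the post (one fatality per 100 million human-driven miles, one in roughly 10 million autonomous miles), not official statistics:

```python
# Back-of-envelope comparison of the fatality rates cited above.
# These are the rough figures from the post, not official statistics.

human_fatalities_per_mile = 1 / 100_000_000
autonomous_fatalities_per_mile = 1 / 10_000_000

ratio = autonomous_fatalities_per_mile / human_fatalities_per_mile
print(f"Implied autonomous rate is {ratio:.0f}x the human rate")

# With a single event the uncertainty is enormous: a one-event Poisson
# observation has an exact 95% interval of roughly 0.025 to 5.6 expected
# events, so the sample size alone can't settle the question either way.
```

Which cuts both ways: the data doesn't show autonomous vehicles are safer, but one fatality also can't prove a tenfold difference with any statistical confidence.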
 
I would have expected someone who lists Silicon Valley as their location to be a little more data driven... That’s the problem with the modern Valley, it’s confused magical thinking for dedication.

https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html

There is about one fatality per 100 million miles for human-driven cars. We have a fatality for autonomous vehicles after roughly 10 million miles, mostly in the most benign environments in the country. Not a statistically significant sample size, but not evidence of indisputable safety.

https://www.nytimes.com/2018/03/23/technology/uber-self-driving-cars-arizona.html

If even the best autonomous systems require human intervention every 5600 miles, it’s not safer than a human driver. Uber required intervention every 13 miles. Drunk teenagers require less intervention.


This is the problem with consumer companies getting into safety-critical systems: rushed testing, hidden data, anticipating the conclusion while underestimating the hard work in between, and confusing caution for lack of vision.

Autonomous vehicles aren’t safer. They may be eventually, but they’re not even close right now.
Interesting. Methinks it would be far easier to design roads specifically for autonomous vehicles. Imagine smart roads which collect data about traffic, pedestrians, hazards, lane markings, cycle lanes, crossings, etc., and feed it into all the vehicles on the road.
 
Interesting. Methinks it would be far easier to design roads specifically for autonomous vehicles. Imagine smart roads which collect data about traffic, pedestrians, hazards, lane markings, cycle lanes, crossings, etc., and feed it into all the vehicles on the road.
There's been a lot of work done looking at these kinds of things. Hyper accurate maps, vehicle to vehicle communications, vehicle to infrastructure communications, etc. I'm far from an expert, but I don't think this is a practical solution.

The first and most obvious: expense. This would be an enormous taxpayer expense and most places have a hard enough time securing funding to keep current roads well paved. Building an entire secondary infrastructure would be a budget buster.

Then there's the chicken-and-egg of it. Nobody will build such a road if there are no vehicles to drive on it, and few people will buy a vehicle they can't drive everywhere.

And would every road need such a makeover? If only main thoroughfares were instrumented, it would preclude the kind of "driverless" model that Waymo is promoting where there's not even a steering wheel in the car.

The sensor density required to monitor bicycles and pedestrians at all places at all times along the road would be pretty extreme. How would sensor failures be handled-- assuming the fault could even be detected? Would we bar autonomous traffic that needed to pass through an area with a faulty sensor? If the repair rate was anything like it is for broken streetlights and potholes, then the roads wouldn't be getting much use in the end.


There has been a lot of effort to build hyper-accurate road maps where the lanes are carefully mapped, along with the markings, signage, crossings, etc., all kept in a database available to the vehicle. There's definitely an advantage to this context, as we all know from our own commutes: you make better decisions in familiar surroundings -- but you can also make bad assumptions and worse decisions when those familiar surroundings change. Every time I drive through a construction zone or by an accident scene, I'm reminded what a bad strategy this is. When a car goes sideways, there's no mechanism to tell the rest of the roadway of the hazard. This probably falls in the category of "useful if weighted properly", but it's an enormous effort to collect and maintain.


What makes the current traffic infrastructure work is that each vehicle is piloted by a very adaptable, clever human. We're very good at making decisions based on local information even when that information is contradictory and changing. You can take a human and drop them in an entirely different vehicle, in a different country, driving on the opposite side of the road, in entirely different terrain and we adapt remarkably quickly.

To make autonomous vehicles work, I think they need to rely just as heavily on local information. There's no reason to think they won't eventually be able to. When you think that most people driving down this road are basically navigating with a flashlight, a bit of peripheral vision, and common sense it's amazing what we can do with such limited information.

Autonomous vehicles, on the other hand, have the advantages of 360° visibility and the ability to see beyond the visible spectrum and in the dark. They know where they are to a centimeter, and precisely how fast they're going. They are able to measure distances to other objects to within a few centimeters and closure rates with great accuracy. There's the potential to share information of state and intent by radio between objects and the computation power to calculate the physics with a level of precision no human could dream of, all without being victim to the distractions of screaming kids in the back seat, a bad day at work, and looking for that french fry that fell between the seat and console while changing tracks on the stereo.


So, while more sensor data (such as from the road) is always helpful, we're already well beyond what a human uses to make decisions. The hard part here is obviously being able to interpret the data, perceive the scene, and make good decisions in ambiguous situations. This is much, much harder than it sounds and is the reason we're still so far from a truly reliable technology.

Even once we're able to do that, there's a whole other layer of ethics that needs to be sorted, as has been alluded to earlier in this thread. I think the current state of the art is still having trouble at the perception level, but once that gets locked down there's still going to be the question of "kill the pedestrian or kill the driver". I don't think the social expectations have been sorted out for that yet, which means this is all being coded with unclear goals. That's the worst of all worlds, because in some cases indecision means the wrong people get killed, but in some it likely means they all do.
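To give a flavor of the "measure distances and closure rates with great accuracy" point above, here's a minimal, illustrative calculation of closure rate and time-to-collision from two consecutive range measurements. The sensor values and update interval are made up; a real system fuses many more measurements:

```python
# Sketch: the kind of computation a ranging sensor makes trivial and a
# human can only estimate -- closure rate and time-to-collision from two
# consecutive range measurements. Values are illustrative only.

def closure_and_ttc(range1_m, range2_m, dt_s):
    """Estimate closure rate (m/s, positive = closing) and
    time-to-collision (s) from two ranges taken dt_s apart."""
    closure_rate = (range1_m - range2_m) / dt_s
    if closure_rate <= 0:
        return closure_rate, float("inf")  # not closing
    return closure_rate, range2_m / closure_rate

# Object at 50 m, then 48.5 m a tenth of a second later:
rate, ttc = closure_and_ttc(50.0, 48.5, 0.1)
print(f"closing at {rate:.1f} m/s, impact in {ttc:.1f} s")
```

Two noisy samples are enough for an estimate no human could produce; the hard part, as the post says, is deciding what the object *is* and what to do about it.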
 
As reported by Ars Technica, people have taken videos showing what the road actually looks like to the human eye at night.

Whereas the Uber dashcam looks incredibly dark (can barely even see the office building):

Screen-Shot-2018-03-22-at-10.34.27-PM-1440x691.png


It apparently looks more like this to people:

kaufman_tempe-1440x801.png


... which is precisely what I and others had been saying about the video released by Uber. People can't just treat any photo or video as if it's properly exposed. For example, take a daytime photo of a window from inside your house exposed for outside -- you will barely be able to see inside the house. Of course to you, it's perfectly lit up inside as well. Similarly, if you expose for indoors, the outside will be way overexposed. What the human eye sees is totally different.

The driver absolutely should have been able to avoid this accident (in addition to the radar/lidar systems).
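The exposure argument can be illustrated with a toy model of a linear camera sensor. The luminance numbers and the simple 8-bit clipping model here are purely illustrative, not a model of the actual dashcam:

```python
# Toy illustration of the exposure point: a linear camera sensor maps
# scene luminance into a fixed window and clips, while the adapted eye
# resolves a much wider range. All numbers are made up for illustration.

def camera_8bit(luminance, exposure):
    """Linear sensor model: scale by exposure, clip to 0-255."""
    return max(0, min(255, round(luminance * exposure)))

scene = {"streetlight pool": 400.0, "pedestrian in shadow": 2.0}

# Exposed for the bright areas, the shadowed region is crushed to black:
for name, lum in scene.items():
    print(name, camera_8bit(lum, exposure=0.6))
```

With the exposure set for the bright pool of light, the shadowed area lands at a pixel value of about 1 out of 255, effectively invisible in the footage even though an adapted eye would see it clearly.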
 
Crosswalks were NOT invented for convenience. They are for the safety of pedestrians crossing a street at a predictable location. Jay-walking (crossing outside of a crosswalk) is illegal in most cities for this very reason... it's unsafe.

You made me think of how, as children, we would play with the children across the street and even in the street; we would step aside and wait for any cars to pass, since we had seen a neighborhood child get hit by a speeding car when he ran across the street for an ice cream truck. I wonder if that is the real problem. It might be ingrained into many people's minds that it's just not a serious crime.
 
I would say no to driverless cars even if it was 1 life lost per decade globally if it took us thousands of driverless car deaths to perfect the technology.
You obviously have no idea how human civilisation and technology have evolved over the past 5000 years. There's never a perfect solution, but humans have the ability to learn. At present, the supposed thousands in your claim is already far fewer than the hundreds of thousands killed by human drivers. Logic and perspective!
 
Horrible mass transit? Compared to what? I’m not even American but I’ve travelled around the world and The US has pretty good transit.

Certainly not comparing the US to Africa or India. But compared to developed parts of Asia and Europe, yes, it's horrible. Just where in the US were you? Boston? D.C.? NYC? Chicago? San Fran? That's not typical in the US. Dallas's metro has gotten better but is still very limited; Houston's is one short ground-level straight line through downtown that frequently hits cars. Austin has none. Most large and mid-size cities have none. The only mass transport options are buses, which are often slow, unsafe, and unreliable. I invite you to experience using the buses in Miami to get from the coast to the airport with your luggage in tow.
 
The NTSB has released its preliminary report on the crash (pdf). It says that the system detected the pedestrian six seconds ahead of the crash but struggled to identify it. The report includes this head-scratcher near the bottom of page 2:

At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision … According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

I think a bit of outraged profanity would be in order here.
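To put that 1.3-second window in perspective, here's a rough calculation. The speed of about 40 mph is an assumption (a round figure in line with the speeds widely reported for the crash), and the 0.7 g braking level is a typical dry-pavement figure, not a value from the report:

```python
# What 1.3 seconds means at roadway speed. The ~40 mph figure and the
# 0.7 g deceleration are assumptions for illustration, not report values.

MPH_TO_MS = 0.44704
speed = 40 * MPH_TO_MS            # ~17.9 m/s

# Distance covered during the 1.3 s in which the system knew emergency
# braking was needed but, by design, neither braked nor alerted anyone:
warning_distance = speed * 1.3
print(f"{warning_distance:.1f} m travelled in 1.3 s")

# Braking distance from that speed at a firm 0.7 g deceleration:
g = 9.81
braking_distance = speed**2 / (2 * 0.7 * g)
print(f"{braking_distance:.1f} m to stop at 0.7 g")
```

Under those assumptions the car covers roughly 23 m in that window, about the same distance it would take to stop from that speed under hard braking. In other words, the disabled 1.3 seconds was plausibly the difference between an impact and a near miss.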
 
The NTSB has released its preliminary report on the crash (pdf). It says that the system detected the pedestrian six seconds ahead of the crash but struggled to identify it. The report includes this head-scratcher near the bottom of page 2:

At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision … According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

I think a bit of outraged profanity would be in order here.
The outrage of profanity you'll hear from me is from my car as I yell at the people around me that can't drive worth a damn. By the time I left my apartment complex this morning, I had seen THREE idiotic maneuvers on the road. While it's clear Uber's software bungled the situation, I would still trust their cars over the 'Murica drivers I have to deal with on a daily basis.

Let's not forget Arizona had 10 pedestrian deaths in ONE week recently: https://www.azcentral.com/story/new...rian-deaths-week-show-major-crisis/422808002/

Can't blame Uber for those and unlike the Uber car, people won't learn from other people's mistakes....
 
This was nothing less than an utter failure! 'Sensors detected her just six seconds before impact.' That's a real problem. This was clear weather; what would have happened in bad weather: heavy rain, snow, fog and such? In six seconds it could not figure out that a collision was about to occur! Identifying the object is a lesser problem; the fact that the car was on a collision course with an unknown object should have taken precedence, and corrective action been taken. The driver was not alerted to a problem. That is criminal. Depending on an unobservant, negligent driver to intervene completes the farce.
 
How does one even expect the average human to be able to take over fast enough in all emergencies that the car cannot handle?

Even airline pilots are known to fall asleep when the plane is on autopilot.
http://www.bbc.com/news/uk-24296544
While it's clear Uber's software bungled the situation, I would still trust their cars over the 'Murica drivers I have to deal with on a daily basis.

The real question is whether you would trust their software/cars over your driving skills. :)
 
The outrage of profanity you'll hear from me is from my car as I yell at the people around me that can't drive worth a damn. By the time I left my apartment complex this morning, I had seen THREE idiotic maneuvers on the road. While it's clear Uber's software bungled the situation, I would still trust their cars over the 'Murica drivers I have to deal with on a daily basis.

Let's not forget Arizona had 10 pedestrian deaths in ONE week recently: https://www.azcentral.com/story/new...rian-deaths-week-show-major-crisis/422808002/

Can't blame Uber for those and unlike the Uber car, people won't learn from other people's mistakes....

You are missing the darker problem exposed here. Which is one I have mentioned a few times and is sadly realized here.

Uber decided to code this system without any real safety in mind and put a human driver in the vehicle as a diversion from this reality. There is absolutely no excuse for this system not alerting the human safety driver (and that is ignoring the inability to properly identify the person earlier than 6 seconds out, or any of the other troubling factors revealed here).

Technology isn’t automatically better or safer just because computers can crunch more data. Without strong regulations on what should be done to ensure safety, and strong punishments for ignoring them, we are all at risk of companies choosing profits or other business interests over human lives. This clearly happened here and could easily happen again...

We are placing too much blind trust in corporate black boxes hidden away as technologically advantageous...

Self-driving cars should be pursued, but there need to be minimum safety requirements and independent testing and auditing well before innocent human lives are placed at risk. This report unequivocally proves my point.
 
The outrage of profanity you'll hear from me is from my car as I yell at the people around me that can't drive worth a damn.

Self-driving cars, properly done, are meant to reduce this problem by eliminating the stresses of driving for their occupants. They can go a little slower because the passengers are not getting irritated by other drivers and trying to get there faster. And with intercommunication, they should be able to coördinate with other self-driving vehicles.

To them, you the Luddite human driver will present an obstacle. Manually and automatically piloted vehicles sharing the road will present some unique challenges.
 
You are missing the darker problem exposed here. Which is one I have mentioned a few times and is sadly realized here.

Uber decided to code this system without any real safety in mind and put a human driver in the vehicle as a diversion from this reality. There is absolutely no excuse for this system not alerting the human safety driver (and that is ignoring the inability to properly identify the person earlier than 6 seconds out, or any of the other troubling factors revealed here).

Technology isn’t automatically better or safer just because computers can crunch more data. Without strong regulations on what should be done to ensure safety, and strong punishments for ignoring them, we are all at risk of companies choosing profits or other business interests over human lives. This clearly happened here and could easily happen again...

We are placing too much blind trust in corporate black boxes hidden away as technologically advantageous...

Self-driving cars should be pursued, but there need to be minimum safety requirements and independent testing and auditing well before innocent human lives are placed at risk. This report unequivocally proves my point.
I didn’t miss your point.

The faster these cars learn how to drive, the more lives we’ll save. People are always going to die in car crashes, even when self-driving tech is refined. Someone probably died during this forum conversation. This is a MAJOR problem that deserves our full attention. As tragic as this was, death has been and will continue to be a part of automobiles. We shouldn’t stop pursuing autonomous technology because of an accident.

Uber took their cars off the roads, re-evaluated the situation and eventually will deploy their cars for further testing. Do I fully trust self-driving cars? Of course not. Testing on private roads is a paradox. How can one be sure the test environment will work as well as the real one? You can’t, which is why these cars are sharing the roads with us.
The real question is whether you would trust their software/cars over your driving skills. :)
Buy me a Waymo van and I’ll glady give you my Mazda. ;)
 
I didn’t miss your point.

The faster these cars learn how to drive, the more lives we’ll save. People are always going to die in car crashes, even when self-driving tech is refined. Someone probably died during this forum conversation. This is a MAJOR problem that deserves our full attention. As tragic as this was, death has been and will continue to be a part of automobiles. We shouldn’t stop pursuing autonomous technology because of an accident.

Uber took their cars off the roads, re-evaluated the situation and eventually will deploy their cars for further testing. Do I fully trust self-driving cars? Of course not. Testing on private roads is a paradox. How can one be sure the test environment will work as well as the real one? You can’t, which is why these cars are sharing the roads with us.

I agree. I live in Arizona and I can tell you firsthand the drivers here are becoming more and more dangerous. This is coming from someone who used to live in California, the crazy-driver capital of the world. Autonomous drivers are a welcome sight in AZ as far as I am concerned.
 
Self-driving cars, properly done, are meant to reduce this problem by eliminating the stresses of driving for their occupants. They can go a little slower because the passengers are not getting irritated by other drivers and trying to get there faster. And with intercommunication, they should be able to coördinate with other self-driving vehicles.

To them, you the Luddite human driver will present an obstacle. Manually and automatically piloted vehicles sharing the road will present some unique challenges.

There are tons of challenges not yet figured out for fully automated cars.

However, they must be able to deal with everything they encounter on the roads: not just human-operated vehicles, but also cyclists, pedestrians, construction zones, snowstorms, power outages, and a near-infinite number of other situations, and be able to immediately and correctly "see", identify, and respond.
The faster these cars learn how to drive, the more lives we’ll save. People are going always going to die from car crashes, even when self-driving tech is refined.

Except now you're changing who is dying from crashes, which poses an ethical dilemma.

Of course, you could just put all of the collision-avoidance tech on human-operated vehicles and call it a day.
 
There are tons of challenges not yet figured out for fully automated cars.

However, they must be able to deal with everything they encounter on the roads: not just human-operated vehicles, but also cyclists, pedestrians, construction zones, snowstorms, power outages, and a near-infinite number of other situations, and be able to immediately and correctly "see", identify, and respond.

Not to mention they have to be capable of dealing with all combinations of partial internal system failures and even a total system failure.

If a self-driving car requires a driver, the value prop is significantly diluted, because it means it cannot be used for folks that are unable to drive.

I think @LogicalApex made a very good point.
We are all at risk of companies choosing profits or other business interest over human lives.
 