I didn’t miss your point.

The faster these cars learn how to drive, the more lives we’ll save. People are always going to die in car crashes, even once self-driving tech is refined. Someone probably died during this forum conversation. This is a MAJOR problem that deserves our full attention. As tragic as this was, death has been, and will continue to be, a part of automobiles. We shouldn’t stop pursuing autonomous technology because of an accident.

Uber took their cars off the roads, re-evaluated the situation, and eventually will deploy their cars for further testing. Do I fully trust self-driving cars? Of course not. Testing only on private roads is a paradox: how can one be sure the test environment behaves like the real one? You can’t, which is why these cars are sharing the roads with us.
Buy me a Waymo van and I’ll gladly give you my Mazda. ;)

As I said in the post you quoted

Self-driving cars should be pursued, but there need to be minimum safety requirements, independent testing, and auditing well before innocent human lives are placed at risk.

We, as a society via a proper regulatory body, need minimum safety requirements defining what collision avoidance systems need to be in place, how they need to operate, and what performance we expect to be reasonable in given scenarios.

These systems need reporting, audits, and validation testing before being placed on public roads, to ensure that safety is prioritized as highly as reasonably possible and that safety issues are quickly resolved when discovered. That includes sufficient alerts to enable human safety drivers to properly intervene as they are tasked to do.

It is inexcusable that Uber was able to put such a deficient system on the roads and kill someone. This was extremely negligent and this should give everyone pause.
 
We, as a society via a proper regulatory body, need minimum safety requirements defining what collision avoidance systems need to be in place.

It would help if we could install collision avoidance systems in human drivers. Sometimes when I am out on the road, it feels like other drivers do not consider not-hitting-stuff to be the most important part of driving.
 
How does one even expect the average human to be able to take over fast enough in all emergencies that the car cannot handle?

Even airline pilots are known to fall asleep when the plane is on autopilot.
http://www.bbc.com/news/uk-24296544

The real question is whether you would trust their software/cars over your driving skills. :)

I think you hit the nail on the head here. When you are manually driving the vehicle, you observe your surroundings differently than you would as a passive passenger. If you see a vehicle behaving erratically, you avoid staying in close proximity to it. If a car is driving you, however, you might not even notice such behaviour. If the car decides that a human should take over in an emergency, you might be totally out of the loop about what’s really going on around you. This is far from ideal. Therefore, I’m not a fan of Tesla’s approach of disengaging Autosteer at a moment’s notice (apparently, in the future, the Autopilot as well). Volvo’s approach of designating segments of road on which the car (and Volvo as a company) takes full responsibility for driving sounds much better than an iffy autopilot on all road sections.

Regarding plane autopilots, they are not really “autopilots” so much as assistants. One needs to constantly monitor and adjust them. During cruise, heading and flight-level adjustments are performed. During a Cat 3 autolanding there is a lot going on, and it requires the effort of two autopilots and two pilots. Not really a self-flying machine...
 
As I said in the post you quoted



We, as a society via a proper regulatory body, need minimum safety requirements defining what collision avoidance systems need to be in place, how they need to operate, and what performance we expect to be reasonable in given scenarios.

These systems need reporting, audits, and validation testing before being placed on public roads, to ensure that safety is prioritized as highly as reasonably possible and that safety issues are quickly resolved when discovered. That includes sufficient alerts to enable human safety drivers to properly intervene as they are tasked to do.

It is inexcusable that Uber was able to put such a deficient system on the roads and kill someone. This was extremely negligent and this should give everyone pause.
The technology is moving way too quickly for regulations, hence they don’t exist. In theory, what you’re saying makes sense, but the reality is we’d have to delay the release of self-driving cars to create regulations. Doing so would result in a considerable number of deaths. The longer humans drive vehicles, the more deaths we’ll see.
There are tons of challenges yet to be figured out for fully automated cars.

However, they must be able to deal with everything they encounter on the roads: not only human-operated vehicles, but also cyclists, pedestrians, construction zones, snowstorms, power outages, and a near-infinite number of other situations, all of which they must immediately and correctly “see”, identify, and respond to.

Except now you're changing who is dying from crashes, which poses an ethical dilemma.

Of course, you could just put all of the collision-avoidance tech on human-operated vehicles and call it a day.
http://asirt.org/initiatives/informing-road-users/road-safety-facts/road-crash-statistics

Annual United States Road Crash Statistics
  • Over 37,000 people die in road crashes each year
  • An additional 2.35 million are injured or disabled
  • Over 1,600 children under 15 years of age die each year
  • Nearly 8,000 people are killed in crashes involving drivers ages 16-20
  • Road crashes cost the U.S. $230.6 billion per year, or an average of $820 per person
  • Road crashes are the single greatest annual cause of death of healthy U.S. citizens traveling abroad
Let those numbers sink in. That’s ONLY for the United States. This is a global problem.

The only thing unethical is to continue to let humans drive. Driving should and will be illegal on public roads once we have a safer system in place.
 
The technology is moving way too quickly for regulations, hence they don’t exist. In theory, what you’re saying makes sense, but the reality is we’d have to delay the release of self-driving cars to create regulations. Doing so would result in a considerable number of deaths. The longer humans drive vehicles, the more deaths we’ll see.

Testing a concept or technology doesn’t necessarily require extensive laws. However, the real question is what is required from a test system. In this case one can strongly assume that Uber’s concept wasn’t ready for safe on-road testing. Therefore, testing requirements should be revised.

Regarding the laws, they are there for a reason. Autonomous cars are not yet ready for mass deployment. When they are, I hope there is strong legislation governing these vehicles. Without a doubt, many companies would like to see themselves leading the mass-market penetration. Without laws, this would be the Wild West all over again.
 
Testing a concept or technology doesn’t necessarily require extensive laws. However, the real question is what is required from a test system. In this case one can strongly assume that Uber’s concept wasn’t ready for safe on-road testing. Therefore, testing requirements should be revised.

Regarding the laws, they are there for a reason. Autonomous cars are not yet ready for mass deployment. When they are, I hope there is strong legislation governing these vehicles. Without a doubt, many companies would like to see themselves leading the mass-market penetration. Without laws, this would be the Wild West all over again.
The Wild West is already on our roads, as pointed out in the above post.

The technology will be refined by the time regulations and standards exist. Theoretical regulations won’t exist for this reason, as logical as they may be. Everything will be reactionary, as the government won’t know something is a problem until it happens. Hindsight is 20/20.

Deaths are a part of transportation. Every form of transportation has deaths. When someone flies out of a Southwest plane, people say they won’t fly Southwest... they say this while driving a two-ton death machine.
 
The technology is moving way too quickly for regulations, hence they don’t exist. In theory, what you’re saying makes sense, but the reality is we’d have to delay the release of self-driving cars to create regulations. Doing so would result in a considerable number of deaths. The longer humans drive vehicles, the more deaths we’ll see.

Technology is not moving too fast for regulations; that is hogwash sold by companies to ensure their self-driving technology never gets properly regulated. Once it is “ready”, they will switch to claiming it is “too complicated and proprietary to disclose its internal workings”.

We absolutely can, and should, require that they add a speaker and lights to warn the human backup driver of hazards, even if the car responds, so the driver can mentally switch from passive passenger to active driver. We can require that these cars identify obstacles correctly within X number of seconds and respond correctly X percentage of the time. We can require detailed reporting from these companies about everything these cars encounter, and we can require that they pass an independent test before being let on the road. We can also require audits of their system code for critical safety systems, much as we already do for critical systems like airline autopilots.
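To make that concrete, here is a minimal sketch of what one such rule could look like as an automated audit check. The log format, field names, and the 2-second/99% thresholds are all hypothetical stand-ins for the “X seconds” and “X percent” above, not anything a regulator has actually specified:

[CODE=python]
# Hypothetical audit check for a rule like "identify obstacles within
# X seconds and respond correctly at least X% of the time".
# Thresholds, log format, and field names are illustrative only.

MAX_IDENTIFY_SECONDS = 2.0   # the "X seconds" placeholder
MIN_CORRECT_RATE = 0.99      # the "X percent" placeholder

def passes_requirement(encounters):
    """encounters: one dict per obstacle encounter, with
    'identify_time_s' (float) and 'responded_correctly' (bool)."""
    if not encounters:
        return False  # no reported data = automatic audit failure
    ok = [e for e in encounters
          if e["identify_time_s"] <= MAX_IDENTIFY_SECONDS
          and e["responded_correctly"]]
    return len(ok) / len(encounters) >= MIN_CORRECT_RATE

# Example audit over a made-up test log:
log = [
    {"identify_time_s": 0.8, "responded_correctly": True},
    {"identify_time_s": 2.4, "responded_correctly": True},   # too slow
    {"identify_time_s": 1.1, "responded_correctly": False},  # wrong response
]
print(passes_requirement(log))  # False: only 1 of 3 encounters pass
[/CODE]

The point isn’t the specific numbers; it’s that these requirements are perfectly auditable once a regulator mandates the reporting.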

Regulations won’t hinder development, but they would go a long way toward ensuring we build these machines safety-first. Proponents assert that safety will improve, but we can only be sure of this with regulatory incentives.
 
Technology is not moving too fast for regulations; that is hogwash sold by companies to ensure their self-driving technology never gets properly regulated. Once it is “ready”, they will switch to claiming it is “too complicated and proprietary to disclose its internal workings”.

We absolutely can, and should, require that they add a speaker and lights to warn the human backup driver of hazards, even if the car responds, so the driver can mentally switch from passive passenger to active driver. We can require that these cars identify obstacles correctly within X number of seconds and respond correctly X percentage of the time. We can require detailed reporting from these companies about everything these cars encounter, and we can require that they pass an independent test before being let on the road. We can also require audits of their system code for critical safety systems, much as we already do for critical systems like airline autopilots.

Regulations won’t hinder development, but they would go a long way toward ensuring we build these machines safety-first. Proponents assert that safety will improve, but we can only be sure of this with regulatory incentives.
If it’s hogwash, how come there aren’t any?

We are in a testing phase; regulations come AFTER testing, once you know what is critical to safety. Unfortunately, only trial and error can determine what is critical to road safety. Once a mistake occurs, such as the one with Uber, a new set of rules can be put into place to prevent the same thing from happening again. This is, correct me if I’m wrong, the first death caused by a fully self-driving car. I guarantee you Uber’s self-driving cars won’t make this mistake again. There are no regulations that will prevent improperly coded software from behaving in an undesirable way.

Again, I’m not saying that regulations that don’t hinder development are “bad”. Any regulation preventing further testing is bad and will result in many deaths. It’s going to take time to figure out how to regulate this technology, and because of that, there isn’t a government regulatory committee in place at the present time.
 
If it’s hogwash, how come there aren’t any?

We are in a testing phase; regulations come AFTER testing, once you know what is critical to safety. Unfortunately, only trial and error can determine what is critical to road safety. Once a mistake occurs, such as the one with Uber, a new set of rules can be put into place to prevent the same thing from happening again. This is, correct me if I’m wrong, the first death caused by a fully self-driving car. I guarantee you Uber’s self-driving cars won’t make this mistake again. There are no regulations that will prevent improperly coded software from behaving in an undesirable way.

Again, I’m not saying that regulations that don’t hinder development are “bad”. Any regulation preventing further testing is bad and will result in many deaths. It’s going to take time to figure out how to regulate this technology, and because of that, there isn’t a government regulatory committee in place at the present time.

There are no regulations currently because technology companies have lobbied for it to be this way and many voters think like you do. For these companies, as I said earlier, no regulations until the technology is complete is best because they can control what regulations are put in place and ensure they are favorable to these companies.

These regulations also need a developing and testing phase, just like this technology. These can, and should, be occurring together. We can't, and shouldn't, be adding safety only after something happens.

Also, there is no guarantee that self-driving cars will "learn" from their "mistakes". Everything these vehicles encounter and do will produce data points recorded somewhere, but how those data points are weighted, stored, and responded to is up to the company developing the technology. Without audits and evolving regulatory frameworks, we have no way of ensuring that these cars actually learn from their mistakes in the way we expect.
 
There are no regulations currently because technology companies have lobbied for it to be this way and many voters think like you do. For these companies, as I said earlier, no regulations until the technology is complete is best because they can control what regulations are put in place and ensure they are favorable to these companies.

These regulations also need a developing and testing phase, just like this technology. These can, and should, be occurring together. We can't, and shouldn't, be adding safety only after something happens.

Also, there is no guarantee that self-driving cars will "learn" from their "mistakes". Everything these vehicles encounter and do will produce data points recorded somewhere, but how those data points are weighted, stored, and responded to is up to the company developing the technology. Without audits and evolving regulatory frameworks, we have no way of ensuring that these cars actually learn from their mistakes in the way we expect.
The only guarantee I can make is that there will be more than 1 million deaths this year caused by human controlled vehicles. That's an epidemic that needs solving. We are solving it.

People who think like you are the reason many people could die in the future. The phobia of self-driving vehicles is real, yet there is no widespread phobia of vehicles driven by humans. It is in our culture to accept human-caused driving deaths. It is not in our culture to accept deaths caused by robot vehicles. This is a cultural issue, and you're part of that culture.

Let me bring back some more numbers....

Worldwide deaths per year caused by human driving stand at 1.3 million. That's the same as the yearly toll from (quick math after the list):
- 6,500 commercial plane crashes killing everyone on board
- 4,333 Amtrak train crashes killing everyone on board
- 433 cruise ships sinking killing everyone on board
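For anyone checking the arithmetic, the per-vehicle capacities (roughly 200 per plane, 300 per train, 3,000 per ship) are my back-solved assumptions; only the 1.3 million total comes from the statistics above:

[CODE=python]
# Back-of-the-envelope check of the equivalences above.
deaths_per_year = 1_300_000

for disaster, capacity in [("plane crashes", 200),
                           ("train crashes", 300),
                           ("ship sinkings", 3_000)]:
    print(f"{deaths_per_year / capacity:,.0f} {disaster}/year")
# 6,500 plane crashes/year
# 4,333 train crashes/year
# 433 ship sinkings/year
[/CODE]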

Additionally, human-caused crashes cost us $518 billion globally and cost the average American $820 out of their yearly paycheck.

Are those death totals newsworthy enough? Who's being "trained to think" in a certain way?
 
The NTSB has released its preliminary report on the crash (pdf). It says that the system detected the pedestrian six seconds ahead of the crash but struggled to identify it. The report includes this head-scratcher near the bottom of page 2:

At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision … According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

I think a bit of outraged profanity would be in order here.

You'd think at a minimum it would slow down, start a turn to avoid the danger in front of the car, and sound the car's horn to alert both the driver and the pedestrian. Reaction times in humans vary, but they're typically 200-500 msec - so there was plenty of time at 1.3 secs before the accident for the pedestrian to move out of the way. On the other hand relying on a disengaged half-asleep bored driver to wake up, evaluate the situation, and make an appropriate reaction in a little over a second: what could possibly go wrong? Heads should roll.

I am becoming increasingly fascinated by AI safety. These are known issues in published academic articles, like making sure there is proper human oversight during the learning and refinement of AI agents' behavioral policies, so why wouldn't engineers be aware of this?
 
Technology is not moving too fast for regulations; that is hogwash sold by companies to ensure their self-driving technology never gets properly regulated. Once it is “ready”, they will switch to claiming it is “too complicated and proprietary to disclose its internal workings”.

We absolutely can, and should, require that they add a speaker and lights to warn the human backup driver of hazards, even if the car responds, so the driver can mentally switch from passive passenger to active driver. We can require that these cars identify obstacles correctly within X number of seconds and respond correctly X percentage of the time. We can require detailed reporting from these companies about everything these cars encounter, and we can require that they pass an independent test before being let on the road. We can also require audits of their system code for critical safety systems, much as we already do for critical systems like airline autopilots.

Regulations won’t hinder development, but they would go a long way toward ensuring we build these machines safety-first. Proponents assert that safety will improve, but we can only be sure of this with regulatory incentives.

I foresee many problems. Suppose you are involved in an accident with one of these vehicles: how do you receive legal recourse in the court system? There is no "driver" to sue, and any damage award is moot since a driverless car has no assets. The manufacturer of the self-driving system, perhaps? What about the software design team, collectively or individually? The same goes for other supporting engineers, managers, or other decision makers. How are these people identified and documented? How long can they be held liable? At what point is liability cut off? It all sounds very expensive.

Most states require a driver's license and insurance to operate a vehicle on the road. How are these vehicles going to meet the standards that apply to everyone? My experience with AI devices leads me to believe that the passing rate for a driver's road test, something all drivers have to take at some point, will be low.
My expectation is that advocates will make excuses and demand an alternate standard. My guess is it will be some nationwide standard that supersedes state testing. I fear such a two-tier system because in it there is no accountability anywhere, period. There are many questions and precious few answers.
 
Annual United States Road Crash Statistics
  • Over 37,000 people die in road crashes each year
  • An additional 2.35 million are injured or disabled
  • Over 1,600 children under 15 years of age die each year
  • Nearly 8,000 people are killed in crashes involving drivers ages 16-20
  • Road crashes cost the U.S. $230.6 billion per year, or an average of $820 per person
  • Road crashes are the single greatest annual cause of death of healthy U.S. citizens traveling abroad
Let those numbers sink in. That’s ONLY for the United States. This is a global problem.

The only thing unethical is to continue to let humans drive. Driving should and will be illegal on public roads once we have a safer system in place.

You miss my point: The best method is to put collision-avoidance on human-operated vehicles.

Everything that makes the self-driving vehicle safer can be added to a human-operated vehicle. And, being human-operated, all the things that make the self-driving car's sensory "vision" flawed are infinitely improved.

And you also neglect the ethical dilemma of now changing who is killed in the accidents.
Let me bring back some more numbers....

Worldwide deaths per year caused by human driving stand at 1.3 million.

How many of those deaths will be solved by urban self-driving cars in the first world?

Have a look at the countries with the highest vehicle fatalities, and those with the lowest:

https://en.wikipedia.org/wiki/List_of_countries_by_traffic-related_death_rate

Don't get me wrong -- the quality of drivers is plummeting, with everyone either drunk, high, or on their phones. Tomorrow's world, everyone!
 
The technology is moving way too quickly for regulations, hence they don’t exist. In theory, what you’re saying makes sense, but the reality is we’d have to delay the release of self-driving cars to create regulations. Doing so would result in a considerable number of deaths. The longer humans drive vehicles, the more deaths we’ll see.
http://asirt.org/initiatives/informing-road-users/road-safety-facts/road-crash-statistics

Annual United States Road Crash Statistics
  • Over 37,000 people die in road crashes each year
  • An additional 2.35 million are injured or disabled
  • Over 1,600 children under 15 years of age die each year
  • Nearly 8,000 people are killed in crashes involving drivers ages 16-20
  • Road crashes cost the U.S. $230.6 billion per year, or an average of $820 per person
  • Road crashes are the single greatest annual cause of death of healthy U.S. citizens traveling abroad
Let those numbers sink in. That’s ONLY for the United States. This is a global problem.

The only thing unethical is to continue to let humans drive. Driving should and will be illegal on public roads once we have a safer system in place.

Tech advances at a run and regulation at a crawl ...
and how do we get the Congress Critters off their knees and educate them on the new tech?
 
The only guarantee I can make is that there will be more than 1 million deaths this year caused by human controlled vehicles. That's an epidemic that needs solving. We are solving it.

People who think like you are the reason many people could die in the future. The phobia of self-driving vehicles is real, yet there is no widespread phobia of vehicles driven by humans. It is in our culture to accept human-caused driving deaths. It is not in our culture to accept deaths caused by robot vehicles. This is a cultural issue, and you're part of that culture.

Let me bring back some more numbers....

Worldwide deaths per year caused by human driving stand at 1.3 million. That's the same as the yearly toll from:
- 6,500 commercial plane crashes killing everyone on board
- 4,333 Amtrak train crashes killing everyone on board
- 433 cruise ships sinking killing everyone on board

Additionally, human-caused crashes cost us $518 billion globally and cost the average American $820 out of their yearly paycheck.

Are those death totals newsworthy enough? Who's being "trained to think" in a certain way?

So why then, are we not investing in better public transport infrastructure?

I'll tell you why.

This isn't about safety. This is about money, and LOTS of money. Safety is just the carrot that is being dangled in front of people so they willingly give up their freedom and allow people to make their money.

Facebook, Alexa, Amazon with a key to your house ... all in the name of comfort and convenience.
 
So why then, are we not investing in better public transport infrastructure?

I'll tell you why.

This isn't about safety. This is about money, and LOTS of money. Safety is just the carrot that is being dangled in front of people so they willingly give up their freedom and allow people to make their money.

Facebook, Alexa, Amazon with a key to your house ... all in the name of comfort and convenience.
It’s irrelevant what the cause is; the effect will be fewer deaths. Building a completely autonomous infrastructure isn’t financially feasible.

Money rules the world, which is why it’s also irrelevant what opinions people have about self-driving vehicles. They are coming whether people like it or not. Advancements in technology steamroll through public opinion.
 
Tech advances at a run and regulation at a crawl ...
and how do we get the Congress Critters off their knees and educate them on the new tech?

I have my concerns about the pace of tech. We have built an entire foundation of business upon the internet, on minimally designed systems where we now have plenty of room to improve the software for security purposes but still rely on dubious shortcuts. A more methodical, conservative approach than the one dictated by market forces would probably be better for everyone.

A parallel analogy might be the automobile itself. Vehicles rapidly became faster and more powerful, with increasing range, before drivers themselves could adapt to the changes; they were simply given fancier toys. Then we added seatbelts, airbags, ABS, and traction control, making drivers feel that the improved safety allowed them to take more risks. The peripheral consequences of progress are often not recognized until after we are stuck cleaning up after them.
 



An autonomous vehicle being tested by Uber struck and killed a woman in Tempe, Arizona late Sunday night, marking what appears to be the first pedestrian killed by an autonomous vehicle, reports The New York Times.

The Uber vehicle in question was in an autonomous driving mode with a human safety driver at the wheel, and the woman who was struck was crossing the street outside of a crosswalk, according to local police. No other details on the accident are available at this time.

[Image: One of Apple's autonomous test vehicles]
Uber is cooperating with Tempe police and has suspended all of its self-driving vehicle tests in Tempe, Pittsburgh, San Francisco, and Toronto at the current time. Uber's autonomous vehicles have previously been involved in collisions, as have vehicles from other companies like Tesla, but this is the first pedestrian-related accident that has resulted in a fatality.

This incident will likely have wide-ranging implications for all companies who are testing autonomous vehicles, including Apple, and it could potentially result in more oversight and regulation.

Apple has been testing its autonomous vehicles on public roads in California near its Cupertino headquarters since last year. Apple vehicles, which include a series of Lexus RX450h SUVs equipped with a host of sensors and cameras, have not been involved in any known accidents to date.

To date, most autonomous vehicles in California and Arizona have been using safety drivers behind the wheel who are meant to take over in the event of an emergency, but California in February lifted that rule.

Starting on April 2, companies in California that are testing self-driving vehicles will be able to deploy cars that do not have a driver behind the wheel. Arizona also allows driverless cars to be tested in the state, and Waymo has been testing autonomous driverless minivans in Arizona since November.

Update: Tempe police chief Sylvia Moir told the San Francisco Chronicle that based on a preliminary investigation, it does not appear Uber is at fault in the accident. "It's very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway," she said. She also clarified that the Uber vehicle did not make an attempt to brake.

Moir did say, however, that she will not rule out the potential to file charges against the back-up driver in the vehicle. Tempe police will work with investigators from the National Transportation Safety Board and the National Highway Traffic Safety Administration to further investigate the accident.

Article Link: Self-Driving Uber Car Kills Pedestrian in Arizona, Accident Could Have Implications for Autonomous Vehicle Testing
 
As reported by Ars Technica, people have taken videos showing what the road actually looks like to the human eye at night.

Whereas the Uber dashcam footage looks incredibly dark (you can barely even see the office building):

[Image: frame from the Uber dashcam video, appearing very dark]


It apparently looks more like this to people:

[Image: bystander photo of the same stretch of road, appearing much brighter]


... which is precisely what I and others had been saying about the video released by Uber. People can't just treat any photo or video as if it's properly exposed. For example, take a daytime photo of a window from inside your house exposed for outside -- you will barely be able to see inside the house. Of course to you, it's perfectly lit up inside as well. Similarly, if you expose for indoors, the outside will be way overexposed. What the human eye sees is totally different.

The driver absolutely should have been able to avoid this accident (in addition to the radar/lidar systems).

They obviously made it darker to make it appear that the pedestrian is more at fault. I've heard people saying that the area is well lit. They are trying to make themselves look not as bad by darkening it.
 
...A parallel analogy might be the automobile itself. Vehicles rapidly became faster and more powerful, with increasing range, before drivers themselves could adapt to the changes; they were simply given fancier toys. Then we added seatbelts, airbags, ABS, and traction control, making drivers feel that the improved safety allowed them to take more risks. The peripheral consequences of progress are often not recognized until after we are stuck cleaning up after them.
Your opinion is that the addition of safety features to autos means drivers can drive riskier, and powerful engines are the enabler? I don’t make that leap.

To me, safety features mean better ways of avoiding an accident, as in “drive defensively”, not driving faster because your in-vehicle radar can slam on the brakes for you.
 
They obviously made it darker to make it appear that the pedestrian is more at fault. I've heard people saying that the area is well lit. They are trying to make themselves look not as bad by darkening it.
I'm not sure if I agree that Uber is deliberately darkening the video. It just looks like they provided the video from a dashcam that doesn't have "night vision". Does it help Uber frame a defense? Absolutely.

For everyone who thinks it was so dark and so badly lit in that area, think about this: if the Uber video represents what a human driver would see with the headlights on, then that's totally unsafe. Think about how far ahead you can see with your headlights on; that's very likely what the back-up driver would have seen had she been looking up.

I actually think a human driver would have seen the pedestrian early enough to slow down and possibly avoid the incident. It's not like she jumped out at the last second. She was slowly moving across the street.
 
To me, the criminal part is having a system designed to NOT alert the driver. I get that the sensors had a hard time identifying the object, but as soon as that happens, ALERT THE DRIVER. You know, "I see something, what do you think?" type of logic. That also lets you refine the AI. A sketch of what that logic might look like is below.
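A minimal sketch of that "ask the human" idea, assuming a hypothetical classifier confidence score and alert hooks; none of these names come from Uber's actual system:

[CODE=python]
# Hypothetical "I see something, what do you think?" logic: alert the
# safety driver whenever classification confidence is low, instead of
# staying silent. All names and the threshold are illustrative.
CONFIDENCE_THRESHOLD = 0.8

def handle_detection(label: str, confidence: float) -> None:
    if confidence < CONFIDENCE_THRESHOLD:
        # The sensors see *something* but can't identify it; this is
        # exactly the moment to pull the human back into the loop.
        sound_alert_chime()
        flash_dashboard_warning(f"Unidentified object ahead ({label}?)")
        log_for_model_refinement(label, confidence)  # feed back into training
    # Either way, the planner should still treat the object as a hazard.

# Stub alert/logging hooks, standing in for real vehicle interfaces:
def sound_alert_chime(): print("*chime*")
def flash_dashboard_warning(msg): print("WARNING:", msg)
def log_for_model_refinement(label, confidence): pass

handle_detection("bicycle", 0.42)  # low confidence -> driver is alerted
[/CODE]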
 
How does one even expect the average human to be able to take over fast enough in all emergencies that the car cannot handle?

Even airline pilots are known to fall asleep when the plane is on autopilot.
http://www.bbc.com/news/uk-24296544

The real question is whether you would trust their software/cars over your driving skills. :)
As a driver you must be alert at all times. At the time of the accident, conditions were good and visibility was good. The operator of the Uber vehicle was looking at her phone and not at the road.
 
As a driver you must be alert at all times. At the time of the accident, conditions were good and visibility was good. The operator of the Uber vehicle was looking at her phone and not at the road.

The NTSB report says the driver wasn’t looking at her phone. It states she was looking at the computer screen Uber places in the vehicle, which shows status information on the system.
 
The NTSB report says the driver wasn’t looking at her phone. It states she was looking at the computer screen Uber places in the vehicle, which shows status information on the system.
That's even worse. The design of the Uber system sounds worse and worse. They have a computer screen for the back-up driver to look at, that makes them take their eyes off the road. Meanwhile, they are doing testing that disables the auto-braking system that comes standard with that car. On top of that, I believe, in this particular situation, they also turned off Uber's own auto-braking system because "testing".

I mean, who comes up with this stuff? Uber needs to take a long hard look at themselves and what they are doing. Seems to be a bunch of cowboys running these trials. If they're looking to test certain aspects of their programming and turning things off (like auto-braking), then they should be doing it with multiple back-up drivers.

Seems like safety is not a priority with Uber.

I'm not a Luddite. I know AI cars are inevitable. I just think that Uber shouldn't be at the forefront of this. Leave it to the Apples, Googles and Volvos of the world.
 