You keep bringing up the safety factor for self driving cars. All I asked for was statistics to show what you are claiming.
Edit: Looks like Macrumors can't display my link correctly. Use Google.
Planes aren't surrounded by objects a few feet away like cars are.
Yes, sure, always, it reads people's minds and sees the future, and if it has to brake within 5 meters but the brakes have no such capacity, it does that too, because it's a computer, you know!
Math can see the future. I'm sure even a calculator from the 1980s could handle something as simple as calculating the trajectories of moving obstacles on the street.
Should probably check out how the technology works and do some research before you comment in disbelief. The car maintains a steady speed that leaves enough braking distance for the complexity and density of objects/people in its surroundings. It will guarantee enough distance to stop in time.
Check out this video to get a feel for how they work:
http://m.bbc.com/news/technology-31377607
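To put a rough number on that (this is only a back-of-the-envelope sketch with assumed figures, not how Google's or anyone's actual system works), the "enough distance to stop" part is simple physics: stopping distance grows with the square of speed, so the car can pick a speed whose stopping distance fits inside the clear space around it.

```python
# Back-of-the-envelope sketch (assumed reaction time and deceleration,
# not real self-driving parameters): stopping distance vs. speed, and the
# highest speed whose stopping distance fits in a given clear distance.
import math

def stopping_distance(speed_kmh, reaction_s=0.1, decel=7.0):
    """Metres needed to stop from speed_kmh.
    reaction_s: assumed sensing/compute delay before braking starts.
    decel: assumed braking deceleration in m/s^2 (~7 on dry asphalt)."""
    v = speed_kmh / 3.6                        # km/h -> m/s
    return v * reaction_s + v * v / (2 * decel)

def max_safe_speed(clear_m, reaction_s=0.1, decel=7.0):
    """Highest speed (km/h) that can still stop within clear_m metres."""
    # Solve v*t + v^2/(2a) = d for v and take the positive root.
    a, t, d = decel, reaction_s, clear_m
    v = -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)
    return v * 3.6

print(stopping_distance(50))   # roughly 15 m from 50 km/h
print(max_safe_speed(10))      # roughly 40 km/h if only 10 m is clear
```

So "maintaining a speed that allows for enough brake distance" basically means continuously solving that kind of inequality against the nearest obstacles.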
So, you're driving and suddenly a child runs into the street chasing a ball, while on the other side there's a biker who is riding properly. The car has to choose between hitting the child and avoiding the biker, or hitting the biker and avoiding the child. Given that the biker wears a helmet and has a much higher chance of survival, who does the car hit? And what about insurance? Who will be liable? That's just one simple situation... And what about using them as weapons?
Apple like going into markets where the devices have poor user experiences and where they can see a way of making them much better (e.g. iPod, iPhone, iPad). So cars fit right in with that.
But while I am happy to pay £2000 for a new laptop every few years, I am not going to get a £40,000 car. So I can't see myself ever getting one of these.
We're supposing it has detected it too, and that's a very big IF.
That's something you can't reduce to 0, so there will always be a situation where it won't be enough, even if the car has analyzed it beforehand and reacted quickly. What would it do then?
My bet: the driver is texting and manages to hit both.
After the iCar, the next step will be for Apple to build the... iFanboys! Then Apple will have a captive audience to which to sell even their worst products, like the iWatch!
Computers would detect obstacles much sooner than humans, and they don't get drunk.
Nothing can be reduced to 0, but computers would come much closer to it than humans can.
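Just to put illustrative numbers on "much sooner" (the reaction times below are textbook-style assumptions, not measured data): the difference in reaction time alone is worth many metres at city speeds.

```python
# Illustrative only: assumed reaction times, not measured figures.
# Distance covered before braking even starts, at 50 km/h.
speed = 50 / 3.6            # 50 km/h in m/s (~13.9 m/s)

human_reaction = 1.5        # s, typical textbook value (assumption)
machine_reaction = 0.2      # s, assumed sensing + planning latency

print(f"human:   {speed * human_reaction:.1f} m before braking starts")    # ~20.8 m
print(f"machine: {speed * machine_reaction:.1f} m before braking starts")  # ~2.8 m
```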
So if we admit that situation is possible and the collision can't be avoided, again: hit the child or the biker? Who programs that decision? Who is responsible? I don't think you've considered the full moral and legal implications of this.
I can't really see anything else driving around in the future. I hate driving, in fact it's one of the things (if not the thing) I hate doing the most. Self driving cars are far more comfortable and much safer. It's already proven that they are far more reliable than humans. Even the best drivers in the world can't beat computers when it comes to precision.
OK, if there's not enough space to brake and no way to leave the street, then the computer would probably still brake to reduce the impact energy and hit the biker, who is better protected than the child.
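And for what it's worth, braking before an unavoidable impact still matters a lot, because impact energy scales with the square of speed (simple physics; the numbers below are only illustrative):

```python
# Kinetic energy scales with v^2, so shedding speed before an unavoidable
# collision pays off disproportionately. Illustrative numbers only.
def kinetic_energy_kj(mass_kg, speed_kmh):
    v = speed_kmh / 3.6
    return 0.5 * mass_kg * v * v / 1000.0   # kilojoules

car = 1500  # kg, assumed typical car mass
print(kinetic_energy_kj(car, 50))   # ~145 kJ at 50 km/h
print(kinetic_energy_kj(car, 25))   # ~36 kJ at 25 km/h, a quarter of the energy
```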
Probably, but then it's his fault: he'll go to jail and either he or his insurer will pay. What happens when it's the system that kills the child?
Those are exactly the problems Google is trying to solve, and they say it is not easy.
When things go wrong, is the driver at fault, or Google, or Apple? Just things to consider.
I'm not sure you can decide the future of safety that easily.
You see? That's why putting our future in machines' hands could be more efficient but also much more uncomfortable.
So your question is about the moral authority of the programming, which is quite valid, and I won't even attempt to make that argument.
My point is: with the automated car it is likely nobody dies, or in your thought experiment, only one of the two dies. I would welcome a difficult moral dilemma for the courts to hash out if we could significantly reduce (or eliminate) mortality.
P.S. I do like your question. I'm just woefully unable to answer it, and as robots become an ever-increasing staple of society, this is a question that will have to be answered by our courts for much more than just cars.
That's for the judge to decide, but technically it would be the computer's fault.
A computer is just a machine: if it malfunctions, is asked to do more than it can, or gets the wrong instructions, the "fault", i.e. the responsibility, always lies outside the machine itself.