Regardless, I'd still be nervous not having 100% positive control over the vehicle. It would take getting used to and developing a trust in the technology to adapt to any and all situations.

Great point, I feel the same way. I think the tech is great, and probably the future of driving. But, the idea of no one behind the wheel is pretty scary.

Even when my wife drives, which is hardly ever if I am in the car with her, I either have to act like I am driving by looking out for stuff, or close my eyes and hope for the best.

There is too much stuff that could go wrong with a computer driving the car, assisted or fully automated.
 
If you want to have autopilot, you have to go into the car's settings, read through a short paragraph explaining what it can and can't do, and then hit a button ...

And, you trust that everyone out there driving around with one of these things has done this? Do you read all those notices when you install software or sign up for services?

This is a vehicle, hurtling down the road (hopefully staying on the road) at high speeds, which weighs thousands of pounds. It's not some recipe software.

I'm not worried that some Darwin-awards-candidate will fail to read it and take him/her-self out.... I'm worried that family, friends, myself, or any non-Darwin-awards-candidate might be driving on the same road!

The term "autopilot" is not what's in question here.

It's "zidong jiashi", which translates literally as "self driving".

I couldn't care less about term definitions and translations. What does the term (in whatever market) mean to the general audience that might purchase and drive it?

It's very interesting to see the progress companies like Tesla are making in this field. Clearly we have a long way to go before these types of cars can become mainstream, and we have to expect accidents and problems along the way - I just hope there aren't too many more accidents, injuries, or deaths.

We're just going to hope there aren't too many more accidents and deaths? How about actually trying to understand the state of the technology and its actual limitations - and adjusting the laws and what these companies can do - instead of just hoping? Sheesh!

Yes, sensors will improve... programmers will think of more situations to include... computers will get faster (allowing more analysis). This will make it appear more and more 'competent' to the uninformed. But, it's never going to be what many futurists think of as 'self-driving' or AI or whatever. That's a qualitative problem, not a quantitative one.

What if it runs over someone while it's driving up to your front door? Who's responsible?

I guess the owner/driver/summoner is. Just hope you're not the pedestrian!

More likely, it'll be something terrible (kids doing chalk drawings in the driveway?). Their system doesn't avoid a stopped car at the side of the road - will it recognize kids huddled on the pavement?? *shudder* Anyone with kids should be taking their Teslas in and having this feature disabled... hard disabled - don't even have the damn code in their car.

And, I've seen people 'testing' this in parking lots on YouTube videos, so people haven't even been obeying the private property limitations. Whether it will detect them or not depends on the sensors and what it's programmed to detect. From watching YouTube videos, I doubt it would. It seems insanely primitive. It could do that much better in the future, I suppose. But even then, if it encounters a situation the programmers didn't think of... oops!

Completely autonomous vehicles aren't legal on public roads. Summon is limited to private property. ... you are responsible for being aware of anything that's shorter than the bumper or hanging in the air ...

A lot of good that will do after you or your kid are flattened by one.
 
This has nothing to do with Apple.
I disagree. (Semi-)self-driving cars are an emerging field, one Apple is strongly rumored to be entering. This site is for rumors relating to Apple. It's relevant. It's not the most relevant (unlike, say, an article about a new Mac Pro coming out tomorrow), but it's relevant.
 
And Apple is trying their hand at self-driving cars. This is a disaster in the making. All you have to do is look at any of Apple's products and services.
Don't worry, just look at the "Waiting for Skylake" thread under MBP; Apple is 20+ years away, and with a little help from Intel, 30+ years away. :))
 
More likely, it'll be something terrible (kids doing chalk drawings in the driveway?). Their system doesn't avoid a stopped car at the side of the road - will it recognize kids huddled on the pavement?? *shudder* Anyone with kids should be taking their Teslas in and having this feature disabled... hard disabled - don't even have the damn code in their car.
Meanwhile, people are backing their SUVs over their kids in driveways, due to limited visibility and inattention. This is an actual thing. Shouldn't there be equal - or more - outrage/concern about current things causing actual vehicle-involved child fatalities in driveways, vs. getting worried about what might happen?

And it's not like the code is going to leap out from behind a tree and bite you. Just don't use it. Airbags could kill you. They're pretty much a loaded gun pointed at your face. Everyone should be taking their cars in and demanding to have the airbags removed - don't even have the damn things in their car. And why limit the warning to "Anyone with kids"? By the same logic, wouldn't you want to demand the code be removed from the cars of anyone who visits you who has a Tesla? They might pull into your driveway, too. And your neighbor, or some random stranger (or their car) might even pull into your driveway simply in the process of turning around.

The interesting part is, the article doesn't give much context for the crash. The stopped car might have presented quite suddenly (say, coming around a bend in the road), and the car's options could easily have been limited by having other cars alongside: braking is insufficient to keep from hitting the stopped car, but if you swerve right, you push the car next to you into a busload of nuns - what action do you take?

The really interesting part is, how should a car respond in such a situation? How would a human respond? Different (and differently trained) humans might respond in a variety of ways. And humans might swear up and down that they would respond one way, but then actually do something quite different if the situation presented itself. Computers can make a decision faster, and based on more data, than a human in such a circumstance, but the computer can only take into account the factors that were programmed in when the code was written.

The long-term solution is to make such situations orders of magnitude less frequent (if the stopped car was transmitting its status to some hypothetical traffic-conditions system, or, say, road-monitoring cameras detected the non-moving mass in the roadway, then the Tesla-or-whatever could know to slow before reaching the stopped car). But it will be very interesting to see how such systems get programmed in the future. Do the companies optimize for least liability? Least loss of life? Least property damage? Most lawful? Some combination of these factors? Or something else entirely? (A toy sketch of that kind of weighting follows at the end of this post.) What if "most lawful" conflicts with "least loss of life"? How would you have it decide between two options, one of which gives a small likelihood of causing a death and another which gives a high likelihood of causing major injuries (losses of limbs for multiple people, say)? Do you give preference, in otherwise equal scenarios, to solutions that save the occupants of the car over other people*, or the reverse? (And what does it do to your company's reputation/sales if the public learns that your cars are programmed to sacrifice their occupants in order to save bystanders?)

A human driver will simply do what seems like the right thing (or, more cynically, the thing most beneficial to them) at the time, taking into account their sense of right and wrong, but also current mood and a whole range of life experiences. Given sufficient input, the computer can evaluate all the data and make the decision quicker than you, but it can only follow the instructions it's been given.

*: (I once heard a caller on a talk show explaining with some enthusiasm how he had been in a collision while he and his family were in a Lincoln Navigator, and the other guy was in some small economy car, and the guy in the other car died, and the caller was very enthusiastic about how the Lincoln Navigator had protected his family and boy was he glad they were driving one and he was totally going to continue to buy them in the future... giving no consideration whatsoever to the idea that maybe, if he hadn't been driving a 6,000 lb Armored Personnel Carrier - uh, SUV - the guy in the small car might have lived through the accident.)

(And just for fun, imagine malware that optimizes for most loss of life, and pushes the busload of nuns over a cliff. New terrorist threat?)
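To make the "which factor do you optimize for" question concrete, here is a toy sketch in Python. Everything in it - the candidate maneuvers, the outcome estimates, and especially the weights - is invented for illustration; no claim is made that any real system looks like this.

```python
# Toy sketch: picking a maneuver by weighted cost. All numbers invented.

CANDIDATES = {
    #                est. chance   est. property   traffic-law
    #                of a death    damage ($)      violations
    "brake_hard":    (0.020,       15_000,         0),
    "swerve_right":  (0.002,       40_000,         1),  # shoves the car beside you
    "swerve_left":   (0.010,       25_000,         2),  # crosses the center line
}

# The whole ethical debate hides in these three numbers: how the
# manufacturer trades off loss of life vs. property damage vs. lawfulness.
W_DEATH, W_DAMAGE, W_LAW = 10_000_000, 1, 2_000

def cost(outcome):
    p_death, damage, violations = outcome
    return W_DEATH * p_death + W_DAMAGE * damage + W_LAW * violations

best = min(CANDIDATES, key=lambda m: cost(CANDIDATES[m]))
print(best)  # "swerve_right" with these weights; raise W_LAW past ~155,000
             # and "brake_hard" wins - "most lawful" beats "least loss of life"
```

The arithmetic isn't the point; the point is that somebody at the manufacturer has to pick those weights in advance, long before any particular busload of nuns is on the road.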
Cars have cruise control which is a perfectly ok term for cars. Planes have autopilot which allows pilots to remove their attention from keeping the plane at a certain altitude or direction. Very different things.
Well, technically, the autopilot in the car also lets the driver remove their attention from keeping the car at a certain speed or direction. In both cases, the driver/pilot needs to be keenly aware of what is in front of them. If your plane runs into another, saying "but I had the autopilot on" won't help.
 
Meanwhile, people are backing their SUVs over their kids in driveways, due to limited visibility and inattention. This is an actual thing. Shouldn't there be equal - or more - outrage/concern about current things causing actual vehicle-involved child fatalities in driveways, vs. getting worried about what might happen?
...

Yep, I remember reading an article recently on just that (the decision tree that they'll have to put into the AI of these cars). I think this is something the NTSB should be debating and deciding NOW, so the structures are in place for autonomous vehicles when the technology gets mature enough.

And, yes, if someone showed up in my driveway with one of these cars, I would be asking if it had this capability in its software (maybe I wouldn't ask them to leave, but I might box them in so the car can't run off on its own and harm someone).

The debate over what hackers can do in the Internet of Things era only continues to get more and more interesting (and a little scary). The autonomous car subject (and murder via malware) was actually the subject of an episode of "Elementary" this past season.
 
If you want to have autopilot, you have to go into the car's settings, read through a short paragraph explaining what it can and can't do, and then hit a button acknowledging you've read it and you want the feature enabled.

If you don't know the limitations, it's because you don't care (you can obviously read or you wouldn't have been able to get through the menus in the first place.)

Autopilot is a perfectly apt phrase. In a plane, autopilot takes over the mundane part of flying - cruising at altitude. In a car, autopilot takes over the mundane part of driving - cruising on the highway.

Nonsense. On modern commercial planes, the autopilot can take care of pretty much everything except taxiing to the gate. Are you aware that in low visibility the autopilot's auto-landing must be used? Compare this to Tesla's "autopilot", which can't be used in any remotely challenging conditions.

Your "mundane" tasks carried out by the autopilot:



Autopilot on a Tesla is a hack job at best. Compare this with Volvo's implementation, and how they are going to take responsibility if the car crashes, and then you know the difference between an autopilot that works like one and an autopilot that is just marketing talk.
 
And, yes, if someone showed up in my driveway with one of these cars, I would be asking if it had this capability in its software (maybe I wouldn't ask them to leave, but I might box them in so the car can't run off on its own and harm someone).
I understand what you're saying, but there's a common tendency to attribute massive importance to novel risks - ones that have received recent media attention, or seem unfamiliar - all out of proportion to how those risks compare to risks that people unknowingly/unthinkingly accept in day-to-day life. When you visit someone's house, do you inquire/inspect, upon arrival, to make sure their kitchen knives are properly locked up, to prevent possible injury? Your odds of someone in the house going nuts and stabbing you or someone else are right up there with the odds of the car going rogue and running someone over.
 
Your odds of someone in the house going nuts and stabbing you or someone else are right up there with the odds of the car going rogue and running someone over.
Heh. It's not really "going rogue" :) ... with how inadequate the software is, it could simply be following the "summon" request. Hopefully they've at least built in a rule that it can't be used if you're more than x distance away.
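For what it's worth, the guard being hoped for is simple to state in code; here is a hypothetical sketch (the function name and the distance limit are invented, not anything Tesla actually ships):

```python
import math

MAX_SUMMON_DISTANCE_M = 6.0  # invented limit, purely for illustration

def summon_allowed(car_xy, phone_xy):
    """Refuse a summon request when the phone is too far from the car.

    Positions are (x, y) in meters in a shared local frame. In practice
    this is harder than it looks: consumer GPS error can easily exceed
    a limit this tight.
    """
    dx = car_xy[0] - phone_xy[0]
    dy = car_xy[1] - phone_xy[1]
    return math.hypot(dx, dy) <= MAX_SUMMON_DISTANCE_M

print(summon_allowed((0.0, 0.0), (3.0, 2.0)))    # True: ~3.6 m away
print(summon_allowed((0.0, 0.0), (30.0, 40.0)))  # False: 50 m away
```

As the docstring notes, position error alone makes even this trivial check shakier than it looks.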
 
Wording aside, who buys a car like that without knowing what it can do and what it cannot?

Anybody with $80,000 (or whatever) to spend on a Tesla.

If you want to have autopilot, you have to go into the car's settings, read through a short paragraph explaining what it can and can't do, and then hit a button acknowledging you've read it and you want the feature enabled.

Wrong! You have to scroll past a short paragraph and click on "accept" - neither reading, comprehension, nor intent to abide by the warning is required. Our lawyer-infested society (which doesn't care about preventing accidents, only about assigning liability after the fact) has done a splendid job of teaching us to ignore this kind of safety warning and other terms and conditions.

Autopilot is a perfectly apt phrase. In a plane, autopilot takes over the mundane part of flying - cruising at altitude. In a car, autopilot takes over the mundane part of driving - cruising on the highway.

No. Planes (and boats, which also have autopilots) fly/sail a long way from other planes/boats, and a long way from the ground/land, and pretty much follow compass bearings. The pilot can take their hands off the stick/wheel/rudder and do something else because they will have minutes to respond to hazards - plus (at least in planes) the pilot is far more rigorously trained (10% how to fly, 90% how to fly safely). In contrast, even going down a freeway, cars continually pass within a few feet of each other and the driver has to be able to react immediately. "Autopilot" in the traditional sense is not applicable to cars: it's full autonomy (that can match an average human driver in reacting to hazards) or nothing. A system like Tesla's that allows drivers to take their hands off the wheel and their attention off the road is insane, at least until Tesla are ready to take away the small print.

Autopilot, from Wikipedia: ...

Right, because people are going to look up "Autopilot" on Wikipedia before deciding to take their hands off the wheel. Not.

It's "zidong jiashi", which translates literally as "self driving".

...as does "Autopilot" to 90% of the English-speaking population.

I think it is the inflatable pilot from the Airplane movies.

At last! Someone with a realistic grasp of the human condition... I think the best you can say is that the majority of people understand that a blow-up doll is not involved.

It doesn't matter what the technical definition of autopilot is. What matters is how people interpret it and whether it makes roads more dangerous.

Sorry - this is the internet. You're meant to base your argument on the wikipedia definition of "autopilot" and let the (undoubted) negligence of the drivers in these incidents absolve everybody else of any responsibility whatsoever.

Meanwhile, people are backing their SUVs over their kids in driveways, due to limited visibility and inattention.

True - and, long-term, self-driving cars have the potential to be much safer than human drivers. Many modern cars already have features that should be helping reduce such accidents.

The problem with Tesla is that their over-eagerness to introduce self-driving features - whatever they name them - before they're ready could cause a backlash that sets the clock back. Hint: if your car includes a feature that has to come with a click-through warning not to use that feature in the obvious way, then that feature isn't ready. Accidents involving Tesla, Google, Apple, et al. are going to receive disproportionate publicity, and the press, public, and politicians are not going to put them in perspective against boring old everyday tragedies.

It shouldn't be called "autopilot", much less "self drive" in Chinese, and it should scream blue murder as soon as it detects that the driver isn't holding the wheel - at least until it's been proven safer than the typical driver.
 
This has nothing to do with Apple.
Apple's rumored car? And by rumored, it's pretty much confirmed, considering they've been poaching engineers from car companies and leasing/buying large factory-like buildings. Apple will presumably take a stab at autopilot since it seems to be the upcoming hot feature.
 
Meanwhile, people are backing their SUVs over their kids in driveways, due to limited visibility and inattention. This is an actual thing. Shouldn't there be equal - or more - outrage/concern about current things causing actual vehicle-involved child fatalities in driveways, vs. getting worried about what might happen?

Yes, but there are bazillions of people in normal cars backing out of driveways each day. There are only a few of these things around. If there were bazillions of these backing out of driveways (given the current state of the tech), there would be a LOT more kids backed over!

And, the difference is that yes, due to visibility or inattention, accidents can and do happen. Technology can certainly assist in helping reduce those accidents. But the so-called autonomous vehicle won't even know it's backed over a kid unless its sensors prevent it, or the software is designed to detect that it met with some resistance and has the proper program in place to do the analysis necessary to figure out it ran someone over.

A human might accidentally run you over (or intentionally, if they are homicidal). A computer controlled vehicle in motion WILL run you over unless it's properly programmed (with adequate sensor capability) not to.
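As a hedged illustration of what "detecting that it met with some resistance" might even mean in code (every threshold here is invented): compare the motion the controller commanded with the motion it actually got, and flag a possible obstruction when they diverge.

```python
# Hypothetical obstruction check: if we're applying meaningful drive torque
# but the car is essentially not moving, something may be blocking a wheel.
# Thresholds are invented for illustration only.

TORQUE_THRESHOLD_NM = 80.0   # "we are clearly trying to move..."
SPEED_THRESHOLD_MPS = 0.05   # "...but we are essentially stationary"

def possible_obstruction(commanded_torque_nm, measured_speed_mps):
    return (commanded_torque_nm > TORQUE_THRESHOLD_NM
            and abs(measured_speed_mps) < SPEED_THRESHOLD_MPS)

if possible_obstruction(commanded_torque_nm=120.0, measured_speed_mps=0.01):
    print("halt and alert - do NOT keep applying torque")
```

Which only reinforces the point above: a check like this catches what its author imagined (a wheel pressed against something solid); a soft obstruction the wheel can ride over may never trip it.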

And it's not like the code is going to leap out from behind a tree and bite you. Just don't use it.

But, someone else using it, might run me over (or worse).

The interesting part is, the article doesn't give much context for the crash. The stopped car might ...

Why don't you go watch the video of it? I sometimes wonder if the fans of this stuff even have a grasp on how immature the tech really is. Obviously, one guy was naive enough about it to lose his life/head, and he was supposedly a futurist, tech type person.

The really interesting part is, how should a car respond in such a situation? How would a human respond? Different ...

Absolutely different. A human will use whatever judgement, training, abilities they have in that moment to assess the situation and make a decision. (That's why we should be pushing for more training, stricter laws on impaired/distracted driving, etc.) The computer, on the other hand, follows a program. If we know the sensor inputs, we'll know exactly what it will do or fail to do.

Computers can make a decision faster, and based on more data, than a human, in such a circumstance, but the computer can only take into account the factors that were programmed in when the code was written.

Computers can take in sensor input and follow programs quickly. The question is how this compares to human capabilities. More data? Better data? I'd not have side-swiped that car in the situation in question, and I bet 95% of drivers wouldn't. But yes, the big issue is that it will only do what is within the parameters of the program and/or what the programmers thought to include.

Don't get me wrong, I'm all for various assistive technologies. For example, a vehicle with the proper sensor could detect and highlight the moose about to cross the highway. Or, maybe beep at me or apply the brakes if it detects someone about to enter a crosswalk in my blind-spot, etc. But, that's different from what we're talking about here.

BTW, I just ran across a good article on AI today:
We don't understand AI because we don't understand intelligence https://www.engadget.com/2016/08/15/technological-singularity-problems-brain-mind/

Nonsense. On modern commercial planes, the autopilot can take care of pretty much everything except taxiing to the gate. Are you aware that in low visibility the autopilot's auto-landing must be used? Compare this to Tesla's "autopilot", which can't be used in any remotely challenging conditions. ... Autopilot on a Tesla is a hack job at best.

Maybe, maybe not. If a plane's 'autopilot' had to deal with what the Tesla does, I'll bet it would fare far worse. The difference is that given what it needs to do, the plane's auto-pilot works quite well. The Tesla's doesn't. That's why the Tesla's system is a hack-job. It's completely incapable of the task that drivers will inevitably try to use it for.

I understand what you're saying, but there's a common tendency to attribute massive importance to novel risks - ones that have received recent media attention, or seem unfamiliar - all out of proportion to how those risks compare to risks that people unknowingly/unthinkingly accept in day-to-day life. When you visit someone's house, do you inquire/inspect, upon arrival, to make sure their kitchen knives are properly locked up, to prevent possible injury? Your odds of someone in the house going nuts and stabbing you or someone else are right up there with the odds of the car going rogue and running someone over.

No, it's because I am familiar with the limitations of the tech, and the over-enthusiasm of futurists and media, and the general misconceptions of the public surrounding AI and autonomous stuff, that I'm concerned.

And, my trust factor of going to a house with knives depends on my trust factor of the human agents involved. If the knives were flying around the house - cutting up food when detected - based on some Apple or Microsoft code, I'd not go anywhere near that house!

True - and, long-term, self-driving cars have the potential to be much safer than human drivers.

Maybe... IF the entire system is automated, and IF sensor technology improves enough to overcome obstacles such as nature's unpredictability, etc.... and, IF the software becomes refined enough to take into account enough of the potential situations that might arise. I'd still rather not be walking near where one is 'driving', just in case.
 
And, you trust that everyone out there driving around with one of these things has done this? ... I'm worried that family, friends, myself, or any non-Darwin-awards-candidate might be driving on the same road!

I couldn't care less about term definitions and translations. ...

We're just going to hope there aren't too many more accidents and deaths? ...

I guess the owner/driver/summoner is. Just hope you're not the pedestrian!

And, I've seen people 'testing' this in parking lots on YouTube videos, so people haven't even been obeying the private property limitations. ...

A lot of good that will do after you or your kid are flattened by one.
People like the ones you describe are why we can't have nice things. Teach your kids situational awareness; it could save their lives.
 
Apple's rumored car? And by rumored, it's pretty much confirmed, considering they've been poaching engineers from car companies and leasing/buying large factory-like buildings. Apple will presumably take a stab at autopilot since it seems to be the upcoming hot feature.

Nobody has any idea what Apple is doing. Car? Autopilot tech? Something completely crazy we haven't thought of? Not to mention this article is complete horse ****. Tesla said the omission was a mistake. But page clicks = $$$.
 
Nobody has any idea what Apple is doing. Car? Autopilot tech? Something completely crazy we haven't thought of? Not to mention this article is complete horse ****. Tesla said the omission was a mistake. But page clicks = $$$.
It's still relevant to the industry that Apple has been rumored to be interested in. This site does have "rumor" right in its name. As for the article, whether or not something happened because of this or that, it's still newsworthy: it did in fact happen, and it did in fact create or add to a discussion in the industry.
 
Nobody has any idea what Apple is doing. Car? Autopilot tech? Something completely crazy we haven't thought of? Not to mention this article is complete horse ****. Tesla said the omission was a mistake. But page clicks = $$$.
Ah, I see. We have a Tesla lover here given your profile. Not going to take the bait on this one.
 
Maybe... IF the entire system is automated, and IF sensor technology improves enough to overcome obstacles such as nature's unpredictability, etc....

Remember they only have to do better than the typical human driver (who is far from infallible) unless society holds self-driving cars to unfeasibly high standards.

However, a few highly-publicised and avoidable tragedies caused by a rush to be first-to-market with self-driving cars could result in just that.
 
Why don't you go watch the video of it? I sometimes wonder if the fans of this stuff even have a grasp on how immature the tech really is. Obviously, one guy was naive enough about it to lose his life/head, and he was supposedly a futurist, tech type person.
Ah, I didn't watch the video on the Reuters site because it requires Flash (sigh), and hadn't imagined that they had actual video of the incident (nothing else mentioned such). Yeah, that was definitely "driver stupid" - any alert driver could have avoided it and more resilient software should have as well. I'd venture a guess the software placed far too high a value on staying in its lane, a judgement call even a relatively inexperienced human driver would have gotten right. Don't mistake me for someone who leaps blindly with open arms to any new tech - I've watched for many decades people excitedly show off tech that is a neat idea but doesn't actually work*. I'm fascinated by new tech, but it doesn't get a free pass on reliability merely because it's new - it has to really work. Driverless cars are in their infancy, and nowhere near ready for exposure to the public, and won't be for a long time.

*: (for years - decades - people were totally enthusiastic about voice recognition - I always said, "let me know when it can recognize speech as well as the computer on the bridge of the Enterprise" - with Siri and the others of its generation we're just finally getting almost sorta kinda to the very start of acceptability; I can tell Siri "set an alarm for 7am" and she's _very_ good about simple things like that, enough to mostly trust; but she still gets a lot of more complicated things absurdly wrong.)
Absolutely different. A human will use whatever judgement, training, abilities they have in that moment to assess the situation and make a decision. (That's why we should be pushing for more training, stricter laws on impaired/distracted driving, etc.) The computer, on the other hand, follows a program. If we know the sensor inputs, we'll know exactly what it will do or fail to do.
I feel a bit like you're explaining things to me that I'd just previously stated. In any case, I agree.
BTW, I just ran across a good article on AI today:
We don't understand AI because we don't understand intelligence https://www.engadget.com/2016/08/15/technological-singularity-problems-brain-mind/
Interesting article. Very Interesting Things likely will happen when we get AI to work, but I think that's quite a bit further away than 2045. But once you get a machine that is smart enough to design and build a machine that is smarter than itself... well there's some evolution there that could happen practically in the blink of an eye. Vernor Vinge wrote a number of interesting stories around the edges of "what happens when the Technological Singularity happens" that were quite fascinating.
Maybe, maybe not. If a plane's 'autopilot' had to deal with what the Tesla does, I'll bet it would fare far worse. The difference is that given what it needs to do, the plane's auto-pilot works quite well. The Tesla's doesn't. That's why the Tesla's system is a hack-job. It's completely incapable of the task that drivers will inevitably try to use it for.
A plane's autopilot is dealing with a pretty heavily controlled situation, which already puts a lot of the necessary inputs into the form of electronic data (information from air traffic control, the guidance systems airports have for landing, etc.), and the entire process is designed to minimize the number of split-second decisions needed (flight rules are designed to keep aircraft well separated). An automobile has pretty much zero relevant information available in electronic form (electronic maps, and Google/Waze will tell you a particular section of road is "slow", but nothing more useful), and has multiple split-second decisions to make every minute, based almost entirely on visually scanning the environment (it can supplement some of this with radar scanning to locate objects, but that won't help with signs and lights and identifying what a shape really is). So, yeah, "autopilot" for a car, on today's uncontrolled roads, is a much harder thing to program than an airplane autopilot.
No, it's because I am familiar with the limitations of the tech, and the over-enthusiasm of futurists and media, and the general misconceptions of the public surrounding AI and autonomous stuff, that I'm concerned.
To be fair, you are responding to something I wrote in reply to someone who was suggesting he would "box in" a car merely because it had the software in question installed. People do, in fact, often react out of proportion to the actual risk presented, when they meet up with something new to them. I wasn't saying don't be concerned, I was saying don't go running for the torches and pitchforks just because something is new, without first objectively comparing the risks it presents against risks you're already accepting every day.
Maybe... IF the entire system is automated, and IF sensor technology improves enough to overcome obstacles such as nature's unpredictability, etc.... and, IF the software becomes refined enough to take into account enough of the potential situations that might arise. I'd still rather not be walking near where one is 'driving', just in case.
The reasonable first start (aside from things like blind-spot warning radar that simply alerts you to things you likely missed) would probably be specific roadways where "driverless" cars were not just allowed, but required - the middle 3/4ths of a long commute, where the special roadway has all the necessary embedded markers for cars to track on, and the cars are in constant communication with each other, to ensure things run fast, safe, and trouble-free - and then control is returned to the driver (with lots of warning ahead of time) once they get to the other end and merge back into normal traffic. I can see something like that, perhaps 15 years down the line. But computers taking over for drivers on uncontrolled city streets? Yeah, no. Not for a very long time. A lot of overenthusiastic people (and car companies) will be pushing for that, but it's going to be a bad idea for a lot longer than they'll want to wait.
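As a toy illustration of the "constant communication" part, here is a hypothetical gap-keeping rule a car on such a roadway might run; the message contents and the gain are invented:

```python
# Toy cooperative gap-keeping on a hypothetical dedicated lane: each car
# broadcasts (position, speed); a follower nudges its speed toward a
# target gap. All values invented for illustration.

TARGET_GAP_M = 20.0
GAIN_PER_S = 0.5  # proportional correction

def follower_speed(leader_pos_m, leader_speed_mps, my_pos_m):
    gap = leader_pos_m - my_pos_m
    # Close in if the gap is too big, ease off if it's too small.
    return leader_speed_mps + GAIN_PER_S * (gap - TARGET_GAP_M)

# Leader 25 m ahead doing 30 m/s: the follower runs slightly faster to close up.
print(follower_speed(leader_pos_m=125.0, leader_speed_mps=30.0, my_pos_m=100.0))
# -> 32.5
```

The appeal of the dedicated-roadway idea is exactly that it reduces driving to something this tractable; the uncontrolled city street is what doesn't reduce.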
Remember they only have to do better than the typical human driver (who is far from infallible) unless society holds self-driving cars to unfeasibly high standards.

However, a few highly-publicised and avoidable tragedies caused by a rush to be first-to-market with self-driving cars could result in just that.
The problem we'll run into is, the cars are inevitably going to end up causing some (probably widely reported, possibly horrible) accidents by making mistakes that seem incredibly alien to us humans, precisely because they won't be based on human reasoning and human fallibilities. The car software could be measurably better than the average human 97 percent of the time, and avoid countless accidents that human drivers would have caused, but when something goes wrong and the software, say, calculates that the most direct route somewhere runs through a school playground at recess (or, say, in the opposite direction, a car goes out of its way to do something that kills its passengers, in the process of avoiding injuring other people), folks won't think of all the lives that were saved by all the accidents avoided, they'll run for their torches and pitchforks to banish the demon cars. Interesting times ahead. And yeah, I totally agree about the "rush to be first-to-market" bit.
 
Ah, I see. We have a Tesla lover here given your profile. Not going to take the bait on this one.

What bait? I like Apple and I want them to do a car. I just think this site posts too much irrelevant info. It's annoying.

It's still relevant to the industry that Apple has been rumored to be interested in. This site does have "rumor" right in its name. As for the article, whether or not something happened because of this or that, it's still newsworthy: it did in fact happen, and it did in fact create or add to a discussion in the industry.

You're reaching. I know what this site is, but the story, which is total clickbait, isn't even true, and has nothing to do with Apple. It's annoying enough to see Samsung articles everywhere. You could literally post any Tesla-related article here then. That doesn't make sense, and I think you would agree.
 
What bait? I like Apple and I want them to do a car. I just think this site posts too much irrelevant info. It's annoying.

You're reaching. I know what this site is, but the story, which is total clickbait, isn't even true, and has nothing to do with Apple. It's annoying enough to see Samsung articles everywhere. You could literally post any Tesla-related article here then. That doesn't make sense, and I think you would agree.
There's indeed quite a bit of reaching (and deflecting) going on; it's just not happening on my (and other posters') side of things. Relevant things are relevant; they might not be of interest to everyone, but that's nothing new or strange - people can simply skip things that aren't of interest to them (for whatever reason), as people have been doing in all kinds of aspects of life for ages.
 