I'm not sure why Apple thinks that vast numbers of people want to buy an Apple car. Ford, GM, Honda, and of course Tesla are already making electric cars and have one or two generations of their own vehicles in the marketplace. What is Apple going to bring that improves on these vehicles and concepts? The three big problems with electric cars (cost, range, and the lack of charging stations) may be solvable with time, but that means a lengthy road to profit while battery technology is improved, costs are lowered, and charging stations become available. You might want to add a fourth problem to that equation: charging time. The GM Bolt is supposed to reach an 80% charge in 20 minutes, but what would that do to battery life? And is that charger a standard, run-of-the-mill version you can find anywhere, or a special charger available only at some locations? There are a LOT of problems to be solved with electric cars. Over time, they probably will be, but electric cars are likely to be a money pit for at least five years, and probably more. For everyone making them.

I have a Toyota with the Entune entertainment system. Toyota makes some of the most reliable and fuel-efficient cars in the world right now, but it doesn't know how to design, program, and implement a good entertainment system. I think Apple may find it can make an entertainment/info system, but has a lot to learn about how to make a car.
 
An AV will always be able to analyze, interpret, and make an informed decision faster than a human driver.
And a human driver will always outsmart a computer in real world situations. A program can beat a human chess player, because every chess position is fully describable by a few coordinates and the next move is predictable by a fixed set of rules. But what happens on the street is pure chaos and you need actual human intelligence to survive. The only thing a computer can do in unforeseen situations is to step on the brakes, which is why there are so many rear-end accidents with self driving cars. And this will never change, because artificial intelligence is a myth.
 
And a human driver will always outsmart a computer in real world situations. A program can beat a human chess player, because every chess position is fully describable by a few coordinates and the next move is predictable by a fixed set of rules. But what happens on the street is pure chaos and you need actual human intelligence to survive. The only thing a computer can do in unforeseen situations is to step on the brakes, which is why there are so many rear-end accidents with self driving cars. And this will never change, because artificial intelligence is a myth.

Not so. Introduce yourself to the concept of fuzzy logic.
 
QNX and BlackBerry? Seems like an unusual hire, given what a poor and outdated user experience those platforms have provided. They are always behind the curve.
 
Self-driving cars on public roads will never meet basic safety requirements.
Man will never fly. Man will never walk on the moon. Traveling around the world takes at least 80 days. Earth is flat.

And a human driver will always outsmart a computer in real world situations.
The vast majority of human drivers are already overtaxed just reacting to unexpected events under normal driving conditions - let alone extreme conditions. Machines have far better sensors, can recognize a critical situation much faster, and can start preventive actions earlier and with much finer dosing. Best example: ESP (Electronic Stability Program).

what happens on the street is pure chaos and you need actual human intelligence to survive.
It looks like chaos to a human because a human being is very limited in processing multiple streams of sensory information at the same time. By the way: your video just proves that the human driver is the primary cause of accidents.

The only thing a computer can do in unforeseen situations is to step on the brakes, which is why there are so many rear-end accidents with self driving cars. And this will never change, because artificial intelligence is a myth.
You may want to update your knowledge of the assistance systems already available and their capabilities. And while you're at it, take a look at the research being done and the already quite advanced state of autonomous driving today.
 
Now that Timmy has revealed his true love for Hillary and the Illuminati, self driving cars are a great way to suppress and control the general public. The uber wealthy will sit back and laugh at these "people movers".
 
The entire concept of an Apple Car is so stupid, unless you literally have Honda or Toyota or someone design the entire thing, except maybe the interior and electronics, which Apple can design. People buy reliable cars, cars that are meant to last 300,000+ miles--if not to keep them that long, then to get higher trade-in values. I don't see an Apple Car coming close to this without massive research and development teams. You can't easily compete with the car companies that already exist in this area.
 
It is unlikely that Apple will ever market a car. How many gold Watches did they sell at $10,000 each? Now they are going to make a car at $40,000+?? Highly unlikely.

Apple is about a good product they can sell millions of, with a good earnings margin. An "Apple Car" is well outside Apple's general business model. They are more likely looking at enhancing current automobile technology and/or autonomous driving.
 
Not so. Introduce yourself to the concept of fuzzy logic.
Can fuzzy logic make ethical decisions or value judgements? Honest question.
For instance, can it determine to swerve a car away from a human being vs a deer on the road?
Or decide to not apply the brakes and sacrifice the driver's life in order to save a schoolbus full of children?
 
Can fuzzy logic make ethical decisions or value judgements? Honest question.
For instance, can it determine to swerve a car away from a human being vs a deer on the road?
Or decide to not apply the brakes and sacrifice the driver's life in order to save a schoolbus full of children?

I'm no AI expert by any means, but from my reading on this subject, fuzzy logic is the method used for machine imitation of precisely this kind of human reasoning. Understand I'm not making an argument in favor of fully autonomous cars. The technology probably won't be here for some time and in the interim we'll be seeing mostly better and smarter augmentation. But I am also aware of the AI advances that could quite readily lead to cars driven by computers being safer than nearly any human driver, and it isn't as difficult or far off as it may seem (if only because humans pretty much suck at driving).
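To make the idea concrete, here's a toy sketch in Python (the membership functions, rules, and numbers are all made up for illustration--this is not code from any real AV stack). The point of fuzzy logic is that crisp sensor readings get mapped onto graded categories, and overlapping rules are blended by their strengths instead of firing a single hard if/else:

```python
def tri(x, a, b, c):
    """Triangular membership: degree (0..1) to which x belongs to a fuzzy set."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def brake_pressure(distance_m, speed_kmh):
    """Blend two hypothetical rules into one braking output in [0, 1]."""
    close = tri(distance_m, 0, 0, 30)    # membership in 'obstacle is close'
    fast = tri(speed_kmh, 40, 120, 200)  # membership in 'we are going fast'
    # Rule 1: IF close THEN brake hard (1.0)
    # Rule 2: IF fast  THEN brake moderately (0.5)
    strengths, outputs = [close, fast], [1.0, 0.5]
    total = sum(strengths)
    if total == 0:
        return 0.0  # no rule fires: no braking
    # Weighted average of rule outputs by firing strength (crude defuzzification).
    return sum(s * o for s, o in zip(strengths, outputs)) / total

print(brake_pressure(5, 60))    # obstacle very close: strong braking
print(brake_pressure(100, 60))  # nothing close: only the speed rule fires
```

Real fuzzy controllers use many more inputs, rule sets, and proper defuzzification, but the core machinery is this kind of graded blending rather than binary thresholds.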
 
Can fuzzy logic make ethical decisions or value judgements? Honest question.
For instance, can it determine to swerve a car away from a human being vs a deer on the road?
Or decide to not apply the brakes and sacrifice the driver's life in order to save a schoolbus full of children?

Once objects are recognized, the fuzzy logic boils down to value assignments and integer arithmetic, the same as a human would use. It's the value assignments given to the car that will be interesting. You should be able to key in the number of strangers' lives that are equal to one close relative...
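A minimal sketch of that point, with entirely hypothetical harm values: once objects are recognized and values are keyed in, the "decision" really is just arithmetic--it's the value table that carries all the controversy.

```python
# Hypothetical harm-value table; the assignments, not the arithmetic, are the
# hard part. These numbers are invented purely for illustration.
VALUE = {"relative": 100, "stranger": 10, "deer": 1}

def cost(outcome):
    """Total 'cost' of an outcome, given as (object_type, count) pairs harmed."""
    return sum(VALUE[kind] * n for kind, n in outcome)

swerve = [("deer", 1)]      # swerving hits the deer
stay = [("stranger", 1)]    # staying on course hits the pedestrian
best = min([swerve, stay], key=cost)  # plain arithmetic picks 'swerve'
```

The comparison itself is trivial; everything interesting (and contestable) lives in the value assignments.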
 
And a human driver will always outsmart a computer in real world situations. A program can beat a human chess player, because every chess position is fully describable by a few coordinates and the next move is predictable by a fixed set of rules. But what happens on the street is pure chaos and you need actual human intelligence to survive. The only thing a computer can do in unforeseen situations is to step on the brakes, which is why there are so many rear-end accidents with self driving cars. And this will never change, because artificial intelligence is a myth.

The term "artificial intelligence" has a very particular meaning in computer science. It does not refer to self-aware sentience, as popular culture tends to think of it (that's artificial general intelligence). There are a lot of definitions, but I like Wikipedia's rather simple one: "the term 'artificial intelligence' is applied when a machine mimics 'cognitive' functions that humans associate with other human minds, such as 'learning' and 'problem solving'."

Artificial general intelligence (AGI) may or may not be possible. It's entirely possible that we wind up with systems combining AI research and machine learning that effectively mimic many of the hypothesized capabilities of AGI without actually meeting the general standards of AGI (consciousness, self-awareness, etc.). But that's largely philosophical (literally: there are a lot of articles on the subject in various philosophy journals), and completely irrelevant to the discussion.

As it stands, AI research has already been validated on literally countless occasions. Machine learning algorithms and technology have had a profound effect on almost every industry. Search engines, optical character recognition, finance, genetics and medical research, video games, etc. The list is long and varied. There's nothing mythical about it, because we aren't talking about AGI. That's a long-term, hypothetical possibility that's largely academic.

Now, as for chess: yes, there are rules. That doesn't make chess simple within the confines of those rules. It's not. But AI research has long since surpassed the challenges of chess. Just earlier this year, Google DeepMind's AlphaGo beat Lee Sedol, arguably one of the world's greatest Go players, in four out of five matches. That's a remarkable achievement considering how complex Go is: in chess, there are approximately 20 possible moves per turn; in Go, that number jumps to over 200 per turn [0]. There are more possible games of Go than there are atoms in the observable universe [1]. The sheer scale of the difficulty, both in terms of the math and the application of it, made this an outcome that most computer scientists didn't really expect. And the difference between even a world-class player and someone like Lee is profound.
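The scale difference is easy to check from the approximate branching factors cited above. A rough back-of-the-envelope (ignoring variable game length, pruning, and everything else a real engine does):

```python
# Rough game-tree growth: branching factor b, depth d -> about b**d positions.
chess_b, go_b = 20, 200  # approximate legal moves per turn, as cited above
depth = 10               # look ten plies ahead

chess_positions = chess_b ** depth  # 20**10, roughly 1e13
go_positions = go_b ** depth        # 200**10, roughly 1e23

# Go's ten-ply tree is (200/20)**10 = 10 billion times larger than chess's.
print(go_positions // chess_positions)  # -> 10000000000
```

This is only a crude branching-factor comparison, but it shows why brute-force search that works for chess collapses for Go, and why AlphaGo's learned evaluation was such a big deal.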

The streets can be chaotic. No one would deny that. But this also highlights a fundamental difference in how a human perceives a scene compared to how a computer does using DNNs and other techniques. Very simply put (perhaps too simply), when you and I are out driving, we tend to see things as a whole and start to pick up on smaller details from there as we focus on them (something is shifting into our vision or movement catches our eye, something changes from one moment to the next, etc.). AVs kind of do the opposite; they work their way up from details and see things through a sort of geometric lens. Instead of seeing what's different and protrudes into their perception like a human would (new car present!), an AV maps out everything its sensors see, classifies them, and uses its knowledge base to make ongoing predictions. Those get compared to what's actually happening, and a more accurate understanding of the car's environment is the result.

You see a car cutting you off. An AV sees a particular type of vehicle that it has been tracking from the moment it was picked up by the vehicle's sensors, with a host of specific characteristics (mass, acceleration, braking distance, etc.), moving on a particular trajectory relative to your own, to the road surface (width, spacing, berms, etc.), and to other traffic. You slam on your brakes out of instinct; an AV adjusts its braking in a specific, controlled manner based on the available data and the car's handling characteristics. One of these is incomparably more precise.

The 'chaos' is broken down into its component parts. Those parts are broken down further and analyzed to determine what effect (if any) they might have on the vehicle's route and actions as a complete (and constantly updated) picture is built up. This video (0:29; also, this one) shows how self-driving cars 'see' the world around them. Note how all objects are identified and tracked. That's happening at all times, and these videos only represent how objects are identified. They don't show what happens after that identification, but suffice it to say, that's just the beginning.
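A heavily simplified sketch of that predict-and-compare loop (one dimension, constant-velocity prediction, and a crude blend factor standing in for a real Kalman filter--all names and numbers here are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Track:
    """One tracked object: its last estimated position and velocity."""
    kind: str
    x: float   # position along the road, metres
    vx: float  # velocity, metres per second

def predict(track, dt):
    """Constant-velocity prediction: where we expect the object next frame."""
    return track.x + track.vx * dt

def update(track, observed_x, dt, alpha=0.5):
    """Compare prediction to observation and blend them (a crude filter).

    A real tracker would weight by sensor noise (Kalman gain); alpha here
    is just a fixed blend factor for illustration.
    """
    predicted = predict(track, dt)
    error = observed_x - predicted
    track.x = predicted + alpha * error       # correct position estimate
    track.vx += alpha * error / dt            # correct velocity estimate
    return error

car = Track("sedan", x=0.0, vx=10.0)
err = update(car, observed_x=12.0, dt=1.0)  # it moved faster than predicted
```

Each cycle, the tracker's model of every object gets a little more accurate, which is what lets the car make the ongoing predictions described above.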

By contrast, human beings miss things. We get tired. On long distance drives, we often dissociate as a means of protecting ourselves from the tedious boredom (popularly referred to as "highway hypnosis"). Your mistake is presuming that there's something inherently unique to the human mind that's necessary for driving because we've had no alternatives until very recently.

Now, as for rear-end collisions, you're sort of right that in the beginning AVs occasionally behaved in a manner that differed from most human drivers in specific circumstances. The cars were, in a sense, programmed to be extra-cautious. Think about new teenage drivers, and how they'll usually be more cautious at stop signs, etc. Since then, Google has worked on adjusting behavior to more accurately simulate how a human might behave even though it doesn't represent the *best* approach. And that's the real beauty of self-driving cars: their algorithms are continuously refined based on new data. These changes aren't happening because of a rash of accidents, however; they're happening because the data shows that it'd further minimize the chance of a problem with other drivers based on simulations and projections.

If you look at Google's monthly reports, you'll note that the clear majority of rear-end collisions were in no way Google's fault, nor were they the result of "[stepping] on the brakes." The most common examples were sitting at a red light and being bumped from behind, or a car, already stopped, rolling forward. And we're talking about, literally, a handful of incidents across a fleet of over 58 vehicles that have driven over 1.725 million miles on public roads in autonomous mode. Finally, Google has to report *any* incident, including those at very low speeds with minimal or no damage and those that take place in manual mode. Those sorts of incidents are notoriously under-reported, yet represent ~55% of all vehicle accidents according to the NHTSA. So even though the total number of incidents is absurdly low, it's very easy to make a big stink over what are literally non-issues. "Rear-end collision" can encompass a whole host of different types of accidents. Overall, there's just one incident--and it was little more than getting sideswiped by a bus at a very low speed--that was partially Google's fault. That's an incredibly impressive track record.

And while an AV can always "step on the brakes," remember that the car already knows what's behind it as well as what's in front. It can make predictions about the effects of a possible action on other drivers well before it ever actually takes that action. If an action is going to result in getting rear-ended and the AV still chooses to take it, it's because that was the last possible action available, with all others being worse. By contrast, researchers have clearly shown that human drivers have an overwhelming instinct to slam on the brakes when something goes wrong, even when doing so will result in an even more dangerous situation. But one of the goals of an AV is to take action well beforehand so that it never gets into that sort of situation in the first place.
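In sketch form, "the last possible action with all others being worse" is just a minimization over predicted-risk scores. The actions and numbers below are invented for illustration; real planners score thousands of candidate trajectories against full sensor data:

```python
def choose_action(candidates):
    """Pick the candidate maneuver with the lowest predicted risk.

    candidates: dict mapping action name -> predicted risk score (lower is
    safer). Braking is scored like any other option, not privileged.
    """
    return min(candidates, key=candidates.get)

# Hypothetical scene: a hard stop risks a rear-end collision, the adjacent
# lane is clear, and doing nothing almost guarantees a collision ahead.
scene = {
    "hard_brake": 0.6,
    "lane_change": 0.2,
    "maintain": 0.9,
}
print(choose_action(scene))  # -> lane_change
```

The point is that braking is evaluated alongside every other option; it only "wins" when the simulations say everything else is worse.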

The concept of a driverless car has already been proven beyond any reasonable doubt. At this point, it's just a matter of putting in the time and effort to refine the work and further expand their capabilities. A big project, no doubt, but one that will be handled fully in time.

0. http://www.scientificamerican.com/article/computer-beats-go-champion-for-first-time/
1. http://senseis.xmp.net/?NumberOfPossibleGoGames
Can fuzzy logic make ethical decisions or value judgements? Honest question.
For instance, can it determine to swerve a car away from a human being vs a deer on the road?
Or decide to not apply the brakes and sacrifice the driver's life in order to save a schoolbus full of children?

A car makes whatever decisions it's programmed to make based on its algorithms, sensor data, and the situation.

You're more or less just reformulating the classic Trolley problem. The problem with that is that it's an ethics thought experiment, and not an engineering problem. By design, the problem is reduced to an absurdly simplistic dilemma: one person or many. It assumes a binary choice, without regard for likely real-world alternatives.

One of the purposes of a self-driving car is to avoid situations like that altogether. Knowing where you are, knowing where everything else is, and being able to run simulations on that data as the car drives forward is about being proactive and avoiding potential problems before they become actual ones. And should a catastrophic accident be unavoidable, a self-driving car will always be more capable of analyzing options and simulating potential actions in order to determine the least-problematic choice available. By contrast, human beings are quite limited: maybe you'll be able to swerve, or jam on the brakes and lower the kinetic energy involved in the impact, but you're pretty much just acting on instinct. Those aren't informed choices on your part, unless you have more time available to act--in which case, the likelihood of an AV being unable to take some action to avoid the accident altogether steadily drops toward zero as the available time increases.

Yes, a self-driving car can recognize the presence of both a human being and a deer on the road. It will do so from significantly further away than even the most alert driver with the clearest of vision (not to mention faster). Yes, it can differentiate between the two. Even a few years ago, BMW was showing off a headlight system in Europe that would use a spotlight to highlight deer, other animals, humans, etc. that might be near to or actively crossing in front of the vehicle. Yes, a self-driving car can recognize a school bus and apply additional rules to its presence. They already do so.

Self-driving cars minimize the likelihood of the Trolley problem ever becoming an actual dilemma. Making the car choose to sacrifice the driver to save a busload of schoolchildren sounds nice in these discussions, but the better solution is to ensure that the car never has to make that decision in the first place. After all, can anyone point to a real-world scenario where a driver had to make a decision like that? More importantly, one that didn't involve the driver being an idiot? It's little more than a hypothetical situation.

People get worked up about the ethics of self-driving cars, but you can program whatever ethical perspective you want into the car. Because of everything else, it'd likely remain little more than a simple series of functions that almost never get called in a real-world scenario. Not because the car is inhuman or doesn't have a conscience, but because the car is capable of monitoring its surroundings and all the variables around it to avoid the situation altogether.
 
Err, I think you'll find that Tesla IS a successful car company. Yes, they may be burning through cash the way an ageing rock star goes through young blondes, but that is merely them putting cash back into the business to build up capacity and build out the Supercharger stations, etc. This is smart of Tesla: while ICE makers are reluctantly releasing a few EVs that are merely compliance vehicles, Tesla is building up its business and, more importantly, its charging network.
I would, however, like Apple to be successful with the Apple Car (if indeed they are making one), though part of me worries that Tim Cook does not have the ability and determination that Steve Jobs had. I am sure Tim Cook is a clever and capable guy in many areas, but Apple needs to stop worrying about shareholders and creating new products for new areas to wow them, and instead simply focus on its core products, such as the iPhone and the Mac, which are lagging behind competitors.
One thing, though, seems to be sure: no new car, EV or ICE, should be autonomous until its safety rating is at least 99%. There are too many idiots on the roads today who think that just because they are able to drive, they drive well. So any autonomous system that gets implemented widely needs to make things better, not keep them the same or make them worse.
Tesla's Autopilot is technically a good system, though it keeps getting abused by these idiots who think they can go to sleep in the back seat of their car or watch a Harry Potter film. So until some company can figure out a way to make a fully autonomous system that stops these idiots from fouling things up and causing accidents, Apple (and others) should simply stick to old-fashioned cars with old-fashioned ideas, such as a driver ACTUALLY driving the car, and hopefully safely.

No, Tesla is aspiring to be a successful company. They have been in business for over 13 years now and are still hemorrhaging money. They will never sell enough cars to make back the investors' cash. That's why they are now dumping even more money into SolarCity. Next, they will need to become a supplier of battery packs for other EV manufacturers with the plant they just built.

I'm not rooting against them; I'm rooting for them. I have a Model 3 on order. But I'm not running from the facts: they are on the edge, and the Model 3 and the battery packs are the only way they can dig themselves out. And I don't know if they have the experience or capability to manufacture on time and reliably. Yet.
 
For reasons associated with the fact that this thread, about Apple getting into the car business, exists.

You have clearly stated the basic fallacy, possibly without noticing. We already know Apple is working on an automotive product, because it's available now and is called CarPlay. What they may be doing beyond that is completely unknown, and the evidence to support the conjecture that their future automotive product is a branded car is essentially nonexistent.
 
And a human driver will always outsmart a computer in real world situations. A program can beat a human chess player, because every chess position is fully describable by a few coordinates and the next move is predictable by a fixed set of rules. But what happens on the street is pure chaos and you need actual human intelligence to survive. The only thing a computer can do in unforeseen situations is to step on the brakes, which is why there are so many rear-end accidents with self driving cars. And this will never change, because artificial intelligence is a myth.


Nonsense. Have you ever driven on public roads? It's chaos as it is. So many bad drivers due to distraction, poor driving ability/incompetence, thrill-seeking, advanced age, slow reaction times, inexperience, mental illness, intoxication, you name it. I often hope for more computer-driven cars on the road.
 
Nonsense. Have you ever driven on public roads? It's chaos as it is. So many poor drivers due to distraction, poor driving ability/incompetence, thrill-seeking, advanced age, inexperience, mental illness, intoxication, you name it. I often hope for more computer-driven cars on the road.

And that's why you need actual human intelligence to master the road. You need to have a concept of driving under the influence and be able to judge whether another driver is drunk or is constantly changing lanes for one of the other reasons. Only if you know the difference can you adapt, foresee what's going to happen next, and come through without an accident.
 
And that's why you need actual human intelligence to master the road. You need to have a concept of driving under the influence and be able to judge whether another driver is drunk or is constantly changing lanes for one of the other reasons. Only if you know the difference can you adapt, foresee what's going to happen next, and come through without an accident.

Actually, no. You don't. Humans are bad at driving, and if you think you are excepted from that description, then you are probably one of those humans who is worse than average at it.
 