No. I brought up that morality is not taught in driving school to point out that morality is not considered relevant to driving.
...
Back in my post I said self-driving cars just need to be safer than human-driven ones.
Morality is built into humans, though it certainly wouldn't hurt to reinforce it in driving school! (And maybe more than once, near the beginning of one's life.) If I had to venture a guess, I think we could eliminate the majority of vehicle-related deaths with regular driver training and stricter law enforcement/penalties for bad driving practices.
But morality is not relevant to driving? Give me a break! If there are any laws associated with it, then morality is involved. Why do you think there is a speed limit, or why do you think you shouldn't drive drunk?
Ah, I see. If only I would "think about it more" then I would naturally come to your conclusion that the problem is completely unsolvable. The basic flaw in your argument is that how Google or Tesla are approaching this problem today hardly matters for the future.
No, I'm just saying I recognize a few things about the challenge. I started my career in electronic engineering. I've spent most of my career in computer science. I spent a good bit of time in data operations (i.e.: systems automation), and even worked for an industrial design firm for a while. I'm a car and driving enthusiast, and have some grad-school education in philosophy of mind. That doesn't make me an expert, but does give me a bit more insight than the average person (and likely a broader view than most people working in AI).
And, while I'm not a fortune teller, technological advancement isn't magic either. Technology won't ever do something that can't be done.
It's also important to separate what I'm saying is a huge challenge from what I'm saying can't be done. I'm admitting that AI tech is going to advance by leaps and bounds. But it's never going to actually think and reason like we do. You're saying that won't matter to driving, I think, while I'm saying it ultimately will (unless what we think of as driving is radically changed).
The most important fact to keep in mind is that human beings do not carry all the potentially necessary information to complete any complex task in our brains. What we lack in the ability to know everything is compensated for by sensory and reasoning abilities. Yet we are demonstrably very dangerous behind the wheel of a car because of the inherent limits to our knowledge, sensory, and reasoning abilities, as well as highly imperfect physical abilities (all of which vary greatly from individual to individual). Of these, save reasoning, it is easy to see how computational systems could be an improvement. It's also possible to imagine how computational reasoning, within the limited bounds of the problem, can also be better.
And, for the most part, a computer is pretty limited as well. They certainly can't know everything. It would be an incredible challenge to match a human's sensory input, because so much of that is based on sensory interpretation. A computer can take 50-megapixel images all day, but that's quite pointless, even compared to a person with poor vision, if the images can't be interpreted.
Sure, a computer could have a radar sensor that might detect something a human couldn't, and react more quickly, IF the proper interpretation of the data, and the appropriate reaction, are built into the program. And, for a particular task, that's relatively easy, and a computer will often beat a human.
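To illustrate what I mean, here's a toy sketch (all the numbers and names are made up for illustration, not from any real system) of what a single pre-programmed radar reaction amounts to:

```python
# Toy example (hypothetical values): a single pre-programmed radar task.
# The sensor reports range and closing speed; a fixed rule decides to brake.

def should_emergency_brake(range_m: float, closing_speed_ms: float) -> bool:
    """Brake if the projected time-to-collision drops below a fixed threshold."""
    if closing_speed_ms <= 0:  # not closing on anything
        return False
    time_to_collision_s = range_m / closing_speed_ms
    return time_to_collision_s < 2.0  # threshold chosen by the programmer

print(should_emergency_brake(30.0, 20.0))  # True: 1.5 s to impact, brake now
```

For that one narrow task, the rule fires faster and more consistently than any human ever will, precisely because the interpretation and the reaction were both decided ahead of time by a person.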
But, once we start layering and combining situations, it gets complex really quickly. The amount of data a human driver is taking in, analyzing, and reacting to is many orders of magnitude more than the best AI system is currently working with. That doesn't mean a computer can't do better in a very specific situation. I'm all for *assistive* technology!
Reasoning may be the tougher nut to crack, but then again, it's also easy to see how an automobile's AI would not have to entirely duplicate human reasoning to become net superior to cars piloted entirely by humans. The key is fuzzy logic, one of the main areas of AI research.
This is where I disagree. There's no magic going on here; it's just a more sophisticated form of look-up and branching. When a situation outside the parameters appears, a human can actually figure it out; a computer can't. Again, I'm not saying a human does this perfectly, or always as quickly as necessary to avoid an accident... but the computer won't either, because for the computer it's not even possible.
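In fact, strip the jargon away and fuzzy logic makes my point for me. Here's a minimal sketch (the membership curves and thresholds are hypothetical, just to show the shape of the thing) of a fuzzy braking rule:

```python
# Minimal fuzzy-logic sketch (hypothetical numbers): fixed membership
# functions plus a fixed rule -- i.e., sophisticated look-up and branching.

def closeness(distance_m: float) -> float:
    """Degree to which 'the obstacle is close' (0.0 to 1.0)."""
    if distance_m <= 10:
        return 1.0
    if distance_m >= 50:
        return 0.0
    return (50 - distance_m) / 40  # linear ramp between 10 m and 50 m

def speed_high(speed_kmh: float) -> float:
    """Degree to which 'the car is going fast' (0.0 to 1.0)."""
    if speed_kmh <= 30:
        return 0.0
    if speed_kmh >= 100:
        return 1.0
    return (speed_kmh - 30) / 70  # linear ramp between 30 and 100 km/h

def brake_command(distance_m: float, speed_kmh: float) -> float:
    """Rule: IF close AND fast THEN brake (AND = min, a common choice)."""
    return min(closeness(distance_m), speed_high(speed_kmh))

print(brake_command(20.0, 80.0))  # ~0.71: fairly hard partial braking
```

Every curve and every rule was put there by a programmer in advance. The outputs blend smoothly instead of snapping on and off, but nothing in there reasons its way out of a situation nobody anticipated.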
The whole thing about automated 'driving' is that it has to be greatly constrained. Like I think I said in an earlier post... IF all the vehicles communicated (ideally, all of them automated), and IF the roadways are well enough mapped/constrained, and IF the non-predictable obstacles can be controlled or eliminated from the situation well enough, and IF there's enough sensor tech to overcome things like weather, then it might work fairly well. But that isn't reality, nor is it driving in any way comparable to what humans currently do.
Could we do the above? Sure, with enough changes and trade-offs to current driving, and a ton of infrastructure change. I used to ride the automated train in Vancouver daily, and it worked pretty well. If that's where the future of cars goes, it would suck IMO, but I guess it would work. If the goal is to save lives, though, I'd rather just work on training, and on eliminating from the road the bad drivers who cause the majority of the accidents.
(For example, I've driven hundreds of thousands of miles, and have only caused one very minor accident, when I first started driving, and have only been in one accident that was a true accident, due to extreme weather conditions... the rest have all been the other party's fault and, from what I know, could have been avoided with some training and/or the other driver not doing something they weren't supposed to be doing. Or, to put it another way, the main problem isn't humans' lack of the capability to drive well/safely... it's our tendency to decide to do what we're not supposed to be doing.)
I am certain it will be figured out over time. Cars will continue to get smarter in the way they assist human drivers (already much improved in just the last few years). The problem, as you've illustrated it, is getting humans to accept that maybe they aren't as good at something as a computer could be. That attitude simply illustrates the limits of human reasoning, not the strengths of it.
Cars won't get smarter; they don't think or learn. Programmers will think of ways of employing technology, sensors, and computers to assist humans, or to attempt to 'drive', yes.
My goal is to get humans to realize the strengths and weaknesses of both humans and AI, and properly implement them in reality, instead of the sci-fantasy stuff in the movies, or the dreams of 'futurists' like Musk, Kurzweil, etc.
If a balanced revenue stream was something Wall St. really cared about, why is Google such a Wall St. darling?
These days, Wall Street is an attempt at a reputable casino. It's about futures speculation, not investment. It's a huge problem for economic stability, but it certainly shouldn't be looked at as any kind of indicator of how well a company is or isn't doing.