I'm curious why advanced control and measurement techniques are nowadays called AI. There's really not much intelligence involved in keeping a car in a lane or avoiding obstacles. It's just programmed reactions to input from sensors.
I didn't say it was easy, just that there isn't really that much "AI" involved. I don't think anyone would call the system that controls your heat pump at home "AI"; it just keeps the room temperature at your desired setting based on the outside temp, the inside temp and perhaps some other variables. Reactions to sensor input according to algorithms.
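To make that concrete, here's a minimal sketch of the kind of control loop I mean. The sensor and actuator functions are stand-ins, and the setpoint and dead band are made-up numbers; the point is that nothing in it learns or "thinks", it just reacts:

```python
import random
import time

SETPOINT_C = 21.0   # desired room temperature
HYSTERESIS = 0.5    # dead band so the pump doesn't cycle on/off constantly

def read_inside_temp() -> float:
    # Stand-in for a real sensor: a noisy reading around the setpoint.
    return 21.0 + random.uniform(-1.5, 1.5)

def set_heat_pump(on: bool) -> None:
    # Stand-in for a real actuator.
    print("heat pump", "ON" if on else "OFF")

def control_loop(cycles: int = 10) -> None:
    heating = False
    for _ in range(cycles):
        temp = read_inside_temp()
        # Plain threshold logic: a programmed reaction to sensor input.
        if temp < SETPOINT_C - HYSTERESIS:
            heating = True
        elif temp > SETPOINT_C + HYSTERESIS:
            heating = False
        set_heat_pump(heating)
        time.sleep(1)  # a real controller might poll once a minute

if __name__ == "__main__":
    control_loop()
```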
The interesting part will be when the elephant in the room has to be discussed: ethics.
A self-driving car might have to make very tough decisions like this:
Going straight ahead means a crash that kills the occupants of the car; the only other options are to swerve right into the pedestrians or left into the cyclists, killing them but saving the occupants.
A human driving on the right side of the road would probably swerve to the right as a reflex, killing the pedestrians, but the car with "AI" has to make a "decision".
None of the options is the obviously right one. The self-driving car cannot "think"; it will do what its algorithms tell it to do, as in the sketch below.
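Purely to illustrate "doing what the algorithms tell it": a toy cost function with invented numbers. I'm not claiming any manufacturer does it this way; the point is that someone has to choose the weights before the crash ever happens:

```python
# Hypothetical outcomes for the scenario above; all numbers are invented.
OPTIONS = {
    "straight": {"occupants_at_risk": 2, "bystanders_at_risk": 0},
    "swerve_right": {"occupants_at_risk": 0, "bystanders_at_risk": 3},  # the pedestrians
    "swerve_left": {"occupants_at_risk": 0, "bystanders_at_risk": 2},   # the cyclists
}

# Whoever sets these weights has written an ethical choice into code.
OCCUPANT_WEIGHT = 1.0
BYSTANDER_WEIGHT = 1.0

def pick_maneuver(options: dict) -> str:
    def cost(outcome: dict) -> float:
        return (OCCUPANT_WEIGHT * outcome["occupants_at_risk"]
                + BYSTANDER_WEIGHT * outcome["bystanders_at_risk"])
    # The "decision" is just the minimum of a cost function someone defined.
    return min(options, key=lambda name: cost(options[name]))

# With equal weights this returns "straight" (the occupants die);
# raise OCCUPANT_WEIGHT to 1.5 and it returns "swerve_left" instead.
print(pick_maneuver(OPTIONS))
```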
A car with "assistive systems" may help the driver with whatever decision he or she makes, but the car does not make the decision.
When the car makes the decision, is the driver no longer at fault?
-"But officer, the car ran over that old lady by itself, I was looking at Facebook on the screen".
Or will the driver be responsible for whatever actions the car takes, even if the driver has no control?
I see some very interesting court cases up ahead...