Interesting. Methinks it would be far easier to design roads specifically for autonomous vehicles. Imagine smart roads that collect data about traffic, pedestrians, hazards, lane markings, cycle lanes, crossings, etc., and feed it into all the vehicles on the road.
There's been a lot of work done looking at these kinds of things. Hyper accurate maps, vehicle to vehicle communications, vehicle to infrastructure communications, etc. I'm far from an expert, but I don't think this is a practical solution.
The first and most obvious: expense. This would be an enormous taxpayer expense and most places have a hard enough time securing funding to keep current roads well paved. Building an entire secondary infrastructure would be a budget buster.
Then there's the chicken-and-egg of it. Nobody will build such a road if there are no vehicles to drive on it, and few people will buy a vehicle they can't drive everywhere.
And would every road need such a makeover? If only main thoroughfares were instrumented, it would preclude the kind of "driverless" model that Waymo is promoting where there's not even a steering wheel in the car.
The sensor density required to monitor bicycles and pedestrians at all places at all times along the road would be pretty extreme. How would sensor failures be handled-- assuming the fault could even be detected? Would we bar autonomous traffic from passing through an area with a faulty sensor? If the repair rate were anything like it is for broken streetlights and potholes, the roads wouldn't be getting much use in the end.
There has been a lot of effort to build hyper-accurate road maps where the lanes, markings, signage, crossings, etc. are carefully mapped and kept in a database available to the vehicle. There's definitely an advantage to this kind of context, as we all know from our own commutes. You make better decisions in familiar surroundings-- but you can also make bad assumptions and worse decisions when those familiar surroundings change. Every time I drive through a construction zone or by an accident scene, I'm reminded what a fragile strategy leaning on a pre-built map is. When a car goes sideways, there's no mechanism to tell the rest of the roadway about the hazard. This probably falls in the category of "useful if weighted properly", but it's an enormous effort to collect and maintain.
What makes the current traffic infrastructure work is that each vehicle is piloted by a very adaptable, clever human. We're very good at making decisions based on local information even when that information is contradictory and changing. You can take a human and drop them in an entirely different vehicle, in a different country, driving on the opposite side of the road, in entirely different terrain and we adapt remarkably quickly.
To make autonomous vehicles work, I think they need to rely just as heavily on local information. There's no reason to think they won't eventually be able to. When you consider that most people driving down the road are basically navigating with a flashlight, a bit of peripheral vision, and common sense, it's amazing what we can do with such limited information.
Autonomous vehicles, on the other hand, have the advantages of 360° visibility and the ability to see beyond the visible spectrum and in the dark. They know where they are to a centimeter, and precisely how fast they're going. They can measure distances to other objects to within a few centimeters, and closure rates with great accuracy. There's the potential to share state and intent between vehicles by radio, and the computational power to calculate the physics with a level of precision no human could dream of, all without falling victim to the distractions of screaming kids in the back seat, a bad day at work, and looking for that french fry that fell between the seat and console while changing tracks on the stereo.
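To make the "calculate the physics" point concrete, here's a toy sketch of the kind of arithmetic a vehicle can run continuously from range and closure-rate measurements: time-to-collision and a stopping-distance check. This is not any vendor's actual stack; the function names, thresholds, and latency figure are all made up for illustration.

```python
def time_to_collision(range_m: float, closure_rate_mps: float) -> float:
    """Seconds until contact, assuming a constant closure rate.
    Returns infinity if the gap is opening or holding steady."""
    if closure_rate_mps <= 0:
        return float("inf")
    return range_m / closure_rate_mps

def needs_braking(range_m: float, closure_rate_mps: float,
                  reaction_time_s: float = 0.1,       # hypothetical sense-to-actuate latency
                  max_decel_mps2: float = 6.0) -> bool:  # assumed hard-braking limit
    """True if braking now is required to keep the gap from closing to zero."""
    # Distance covered during the reaction delay, plus braking distance
    # from the kinematic relation v^2 / (2a).
    stopping_dist = (closure_rate_mps * reaction_time_s
                     + closure_rate_mps ** 2 / (2 * max_decel_mps2))
    return stopping_dist >= range_m

print(time_to_collision(30.0, 10.0))  # 3.0 seconds to contact
print(needs_braking(30.0, 10.0))      # False: stopping distance ~9.3 m < 30 m gap
print(needs_braking(8.0, 10.0))       # True: ~9.3 m > 8 m gap
```

A human driver eyeballs all of this; a machine can recompute it exactly, many times a second, for every object in view.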
So, while more sensor data (such as from the road) is always helpful, we're already well beyond what a human uses to make decisions. The hard part here is obviously being able to interpret the data, perceive the scene, and make good decisions in ambiguous situations. This is much, much harder than it sounds and is the reason we're still so far from a truly reliable technology.
Even once we're able to do that, there's a whole other layer of ethics that needs to be sorted, as has been alluded to earlier in this thread. I think the current state of the art is still having trouble at the perception level, but once that gets locked down there's still going to be the question of "kill the pedestrian or kill the driver". I don't think the social expectations have been sorted out for that yet, which means this is all being coded with unclear goals. That's the worst of all worlds, because in some cases indecision means the wrong people get killed, but in others it likely means they all do.