Luminar Technologies (LAZR) is the ticket here. Apple just needs to acquire them: they have state-of-the-art technology and the people who know all about it, versus selling iPads in the company store. Go Apple - Go Luminar!
 
One video shows a product being used by an actual customer. And the other video shows a product that got publicly cancelled. Nice try comparing the two though.

I hope you're not betting your puts on that media circus that'll never happen.

Anyways, see ya. 👋
Lots of confidence there from a video of a guy driving to a grocery store. Kinda like all the confidence everyone here had after seeing the AirPower videos. We'll see how it does with every complexity found on roads around the world, if it ever gets into the hands of the people who paid thousands of dollars for a promise.

Looks like they have their work cut out for them:

 
not needed for self-driving. lidar doesn't even work in heavy rain. tesla got it right: pure vision is all that is needed for FSD.
agreed. machine vision will always beat lidar as a solution for real-world driving environments.

No they didn't. Tesla's cameras don't work in rain. Lidar can be made to work in rain, and even in snow, if it's powerful enough. Passive sensors like cameras are limited. Musk just wants to take the cheap way out.

and what exactly do you base this on? another rumor or facts?

Ok so u work at Tesla and are an expert at lidar huh?
lidar requires mapping and advance knowledge of the world. Machine vision can detect and adapt to novel situations, just like humans. Cameras, big data (from millions of Teslas on the road), and advanced machine learning are the only viable solution if you want true self-driving in as many situations as possible. Elon was right on this.
 
One video shows a product being used by an actual customer. And the other video shows a product that got publicly cancelled. Nice try comparing the two though.

I hope you're not betting your puts on that media circus that'll never happen.

Anyways, see ya. 👋
Hey man. You are absolutely right about your stance on Tesla and its cameras. I did some extensive research myself a month or so ago. First of all, I do not own a Tesla, and personally I think Musk is overrated, even though he is very intelligent. But his camera system and the FSD computer vision program are the clear front runner right now. FSD gives them much more freedom, and they can make changes quickly and efficiently using the years' worth of data, and any future data, from cars already on Autopilot. There's no clear indication that LiDAR today has a competitive edge. Waymo uses LiDAR, but they have to carefully map everything block by block to gather that data. That takes way too long and could potentially be a handicap. Things could change in the future and LiDAR could prove to be superior, but I'm not sure if it will.

Here's what I find interesting about Apple and LiDAR. A few years ago TechCrunch did a great deep dive on the new Apple Maps. They interviewed Eddy Cue and some others on the maps team, plus drivers. Eddy said something like nobody is doing this the way Apple is doing it, and talked about how all these data points are used. Maybe that's PR talk, but I sort of believe him. I have a feeling that using LiDAR for maps was killing two birds with one stone. Obviously they were rebuilding the maps from the ground up. But they were also mapping, block by block, all this data for self-driving. They even had people walking around with LiDAR backpacks. If I'm right, then they have been using Waymo's approach for years in secret. I also have a feeling that all the cities that have Apple's Look Around view in Maps will be the cities that launch with any future self-driving modes. Apple has the potential to have the whole country mapped when it launches its car, and then it can overlay future self-driving car data. So basically they have the potential to have the best of both Tesla's and Waymo's approaches. Maybe I'm right. Maybe I'm wrong. But Apple must be onto something if they want to invest heavily into LiDAR for all its products. Apple's head of AI, John Giannandrea, whom they poached from Google, will do a lot of good here. Also check out videos of the startup Apple bought called Drive.ai. Check that article out too: https://techcrunch.com/2018/06/29/apple-is-rebuilding-maps-from-the-ground-up/amp/
 
Indie Semiconductor.

From Bloomberg:
The Cupertino, California-based technology giant is in active talks with a number of potential suppliers for these laser-based sensors that allow a car's computer to "see" its surroundings.
 

[Attachment: Screen Shot 2021-02-19 at 12.40.15 PM.jpg]
Hey man. You are absolutely right about your stance on Tesla and its cameras. I did some extensive research myself a month or so ago. First of all, I do not own a Tesla, and personally I think Musk is overrated, even though he is very intelligent. But his camera system and the FSD computer vision program are the clear front runner right now. FSD gives them much more freedom, and they can make changes quickly and efficiently using the years' worth of data, and any future data, from cars already on Autopilot. There's no clear indication that LiDAR today has a competitive edge. Waymo uses LiDAR, but they have to carefully map everything block by block to gather that data. That takes way too long and could potentially be a handicap. Things could change in the future and LiDAR could prove to be superior, but I'm not sure if it will.

Here's what I find interesting about Apple and LiDAR. A few years ago TechCrunch did a great deep dive on the new Apple Maps. They interviewed Eddy Cue and some others on the maps team, plus drivers. Eddy said something like nobody is doing this the way Apple is doing it, and talked about how all these data points are used. Maybe that's PR talk, but I sort of believe him. I have a feeling that using LiDAR for maps was killing two birds with one stone. Obviously they were rebuilding the maps from the ground up. But they were also mapping, block by block, all this data for self-driving. They even had people walking around with LiDAR backpacks. If I'm right, then they have been using Waymo's approach for years in secret. I also have a feeling that all the cities that have Apple's Look Around view in Maps will be the cities that launch with any future self-driving modes. Apple has the potential to have the whole country mapped when it launches its car, and then it can overlay future self-driving car data. So basically they have the potential to have the best of both Tesla's and Waymo's approaches. Maybe I'm right. Maybe I'm wrong. But Apple must be onto something if they want to invest heavily into LiDAR for all its products. Apple's head of AI, John Giannandrea, whom they poached from Google, will do a lot of good here. Also check out videos of the startup Apple bought called Drive.ai. Check that article out too: https://techcrunch.com/2018/06/29/apple-is-rebuilding-maps-from-the-ground-up/amp/
Sure.

There are benefits to having LiDAR on their cars. Their current Maps Look Around feature has a much better feel than Google Street View thanks to LiDAR, so imagine Apple Maps Look Around being updated every day, globally, because all of their cars are constantly being driven around the world and collecting data. That's great. And LiDAR would accelerate training their vision stack by auto-labeling features from video frames down to the centimeter level.

But for a Level 5 self-driving application, LiDAR won't be needed, IMO.
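To make the auto-labeling point above concrete, here's a minimal sketch of projecting LiDAR returns into a camera frame to produce sparse depth labels for a vision network. Everything here (the pinhole intrinsics, the point cloud) is invented for illustration; it's just the geometry of the idea, not anyone's actual pipeline.

```python
import numpy as np

# Toy sketch: project LiDAR points into a camera image to produce
# per-pixel depth labels for training a vision network.

# Assumed pinhole intrinsics (focal lengths and principal point, in pixels)
fx, fy, cx, cy = 1000.0, 1000.0, 640.0, 360.0
W, H = 1280, 720

# LiDAR points already transformed into the camera frame:
# x right, y down, z forward (meters). Values are made up.
points = np.array([
    [ 1.2, 0.3, 10.0],   # 10 m ahead, slightly right
    [-2.0, 0.1, 25.0],   # 25 m ahead, to the left
    [ 0.0, 1.0,  4.0],   # road surface close to the car
])

depth_labels = np.full((H, W), np.nan)   # sparse depth map, NaN = no label

for x, y, z in points:
    if z <= 0:
        continue                        # behind the camera
    u = int(round(fx * x / z + cx))     # pinhole projection
    v = int(round(fy * y / z + cy))
    if 0 <= u < W and 0 <= v < H:
        depth_labels[v, u] = z          # LiDAR range becomes the label

print(np.nanmin(depth_labels), "m is the nearest labeled depth")   # 4.0
```

Scale that up to millions of frames and you get metric ground truth essentially for free, which is the appeal.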
 
based on video evidence
ok, thanks for the video ...
you do realize that NONE of the technologies are ready for primetime, right? they are all still in development and far from "mass production" (as in hundreds of thousands of cars) ... how it is going to play out remains to be seen, but my guess is it will be a combination of multiple different sensor technologies ...
 
agreed. machine vision will always beat lidar as a solution for real-world driving environments.

lidar requires mapping and advance knowledge of the world. Machine vision can detect and adapt to novel situations, just like humans. Cameras, big data (from millions of Teslas on the road), and advanced machine learning are the only viable solution if you want true self-driving in as many situations as possible. Elon was right on this.
that remains to be seen and proven; none of the technologies used are ready to be deployed across the auto industry ... they are all still in development ...
 
ok, thanks for the video ...
you do realize that NONE of the technologies are ready for primetime, right? they are all still in development and far from "mass production" (as in hundreds of thousands of cars) ... how it is going to play out remains to be seen, but my guess is it will be a combination of multiple different sensor technologies ...
of course it's not ready for primetime. there's a big label saying BETA on the Tesla software, and it nags you to keep your hands on the wheel while FSD Autopilot is on, because it simply isn't ready for all users to use. the software is still in development.

multiple different sensors don't make sense in the long run. heavy snow/rain is going to cause lidar to fail, at which point the car must rely on vision alone to drive. if a car can drive in heavy rain/snow with vision alone, it can handle most other conditions with vision alone too.
 
Sure.

There are benefits to having LiDAR on their cars. Their current Maps Look Around feature has a much better feel than Google Street View thanks to LiDAR, so imagine Apple Maps Look Around being updated every day, globally, because all of their cars are constantly being driven around the world and collecting data. That's great. And LiDAR would accelerate training their vision stack by auto-labeling features from video frames down to the centimeter level.

But for a Level 5 self-driving application, LiDAR won't be needed, IMO.
I agree about Level 5. At Level 5 we have the potential to not even need speed limits anymore (on the highway at least) in a world that has gone completely autonomous. But for Apple's sake, LiDAR will be necessary at the start, and as Apple continues to (I think) secretly map out whole cities for autonomy over the next several years. Everyone knows, or at least has the impression, that Apple will enter the game with the biggest disadvantage: near zero real-world user data. Check out that article if you haven't. The author is very heavily hinting at Apple laying all the groundwork since 2015 to train its autonomy vision stack. I look forward to the next decade of autonomy.
 
I would posit that Apple buying up LiDAR sensors is actually for future iPhones and iPads rather than a self-driving car.
If you take advantage of it, the LiDAR sensor in the 12 Pro is worth the increased cost.

Using it to scan and film things in 3D is revolutionary given the cost of professional gear. The TrueDepth camera gives good point-cloud data for smaller items, but with the LiDAR sensor you can scan an entire interior.
>Apple is in discussions with multiple suppliers of LiDAR sensors appropriate for a self-driving vehicle

Interesting. Why would they want to put LiDAR sensors appropriate for a self-driving vehicle into an iPhone or iPad?
 
Hey man. You are absolutely right about your stance with Tesla and it’s cameras. I’ve done some extensive research myself a month or so ago. First of all I do not own a Tesla and personally I think Musk is overrated even though he is very intelligent. But his camera system and the FDS computer vision program is the clear front runner right now. FDS gives them much more freedom and they can make changes quickly and efficiently using the years worth or data and any future data of cars already on autopilot. There’s no clear indication that LiDAR today has a competitive edge. Waymo uses LiDAR but they have to carefully map everything block by block to gather that data. That takes way too long and could potentially be a handicap. Things could change in the future and LiDAR could prove to be superior but I’m not sure if it will.

Here’s what I find interesting about Apple and LiDAR. A few years ago TechCrunch did a great deep dive on the new Apple Maps. Interviewed Eddy Cue and some others on the maps team and drivers. Eddy said something like nobody is doing this the way Apple is doing it and talking about how all these data points are used. Maybe that’s PR talk but I sort of believe him. I have a feeling that them using LiDAR for maps was killing two birds with one stone. Obviously they were rebuilding the maps from the ground up. But they were also mapping block by block all this data for self driving. They even had people walking around with LiDAR backpacks. If I’m right then they have been using Waymo’s approach for years in secret. I also have a feeling that all the cities that use Apples LookAround view in maps will be the cities that launch with any future self driving modes. Apple has the potential to have the whole country mapped when it launches their car and then they can overlay future self driving car data. So basically they have the potential to have the best of both Tesla and Waymos approach. Maybe I’m right. Maybe I’m wrong. But Apple must be onto something if they want to invest heavily into LiDAR for all its products. Apples head of AI they stole from Google John Giannandrea will do a lot of good here. Also check out videos of the start up Apple bought called Drive.AI. Check that article out too. https://techcrunch.com/2018/06/29/apple-is-rebuilding-maps-from-the-ground-up/amp/
Good take. My one criticism of your reasoning on what Apple is doing -- those LiDAR maps, to be effective with the Waymo approach for full self-driving, have to be 100% up to date. So say Apple did a full LiDAR map of NYC before the COVID pandemic started. It has radically changed, because there are now outdoor seating areas for huge numbers of restaurants encroaching into what used to be part of the street. There is a high likelihood that a lot of these changes will be long-term if not permanent. So the entire NYC LiDAR map would have to be redone from scratch.

That's why Tesla is betting on computer vision. No need to constantly redo worldwide mapping. Mind you, I'm not claiming Tesla will suddenly be ready for Level 5 FSD to be released -- it still may take a long while -- but if I had to bet, I wouldn't pick LiDAR.
 
Good take. My one criticism of your reasoning on what Apple is doing -- those LiDAR maps, to be effective with the Waymo approach for full self-driving, have to be 100% up to date. So say Apple did a full LiDAR map of NYC before the COVID pandemic started. It has radically changed, because there are now outdoor seating areas for huge numbers of restaurants encroaching into what used to be part of the street. There is a high likelihood that a lot of these changes will be long-term if not permanent. So the entire NYC LiDAR map would have to be redone from scratch.

That's why Tesla is betting on computer vision. No need to constantly redo worldwide mapping. Mind you, I'm not claiming Tesla will suddenly be ready for Level 5 FSD to be released -- it still may take a long while -- but if I had to bet, I wouldn't pick LiDAR.
Agree 100%. That's another downfall of LiDAR: the remapping. But for Apple it will be necessary, due to a lack of actual users reporting autonomous data at launch. I'll be interested in how they dig themselves out of that hole when the Apple car launches, but I think they may have a good foundation for that. Check out that TechCrunch article about Apple Maps from 2018. Very interesting about laying the possible foundation of autonomy data, and some of what they are doing there sounds similar to what FSD does on the back end. Here's something I wish I knew more about... in 2016 Apple invested $1 billion into Didi Chuxing (the Uber of China and other parts of the world). They have "550 million users and tens of millions of drivers." Didi has a lot of maps data from ride sharing, but they also have projects being developed and deployed for autonomous cars. As part of that investment, does Apple get access to autonomy data? Gurman needs to do some investigating on that. I'd like to know.
 
And ultrasonics (which likely don't work in rain), and radar. Tesla doesn't just rely on cameras.
Ultrasonics have very little to do with autonomous driving, for the practical reason that their range is very short. They are great for navigating tight spots at a very slow pace, but are of no use when the vehicle is actually moving in traffic. Radars, LIDARs and VIS/NIR cameras are the way to go.
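A quick back-of-the-envelope calculation shows why. Sound is slow, so a usable echo range of roughly 5 m (an assumed ballpark for automotive ultrasonic sensors) eats the whole sensing budget at traffic speeds:

```python
# Why ultrasonics are a parking-only sensor: the echo is too slow and too short.
speed_of_sound = 343.0   # m/s in air at ~20 degrees C
echo_range = 5.0         # m, a rough upper bound for automotive ultrasonics

round_trip = 2 * echo_range / speed_of_sound
print(f"echo round trip: {round_trip * 1000:.0f} ms")                 # ~29 ms

highway_speed = 30.0     # m/s, about 67 mph
print(f"traveled while waiting: {highway_speed * round_trip:.2f} m")  # ~0.87 m
```

And even ignoring the echo latency, a 5 m horizon at 30 m/s gives you about a sixth of a second of warning, which is useless in moving traffic.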
 
agreed. machine vision will always beat lidar as a solution for real-world driving environments.

lidar requires mapping and advance knowledge of the world. Machine vision can detect and adapt to novel situations, just like humans. Cameras, big data (from millions of Teslas on the road), and advanced machine learning are the only viable solution if you want true self-driving in as many situations as possible. Elon was right on this.
I think this is based on some assumptions which are not necessarily entirely accurate.

LIDARs and computer vision are two complementary technologies. They have different strengths and weaknesses.

Computer vision is a passive technology. It is possible to draw a very high-resolution map of the surrounding world, and the sensors are dirt cheap. However, if the contrasts in the scene become too large, or if there is not enough light available, CV is in trouble. Also, computer vision is not good at measuring distances accurately.

LIDARs are able to create a 3D map of the surrounding world with highly accurate distance measurements. They are insensitive to glare and lack of light, but the sensor resolution is not as high as that of cameras, and the devices are expensive.

An example of the difference: a rainy, dark autumn night with poor illumination. Other cars cause a lot of glare, and there is a reflectorless pedestrian in dark clothes approaching a pedestrian crossing. This situation is very tricky for a human driver, and it is very tricky for any computer vision system. However, a LIDAR has no difficulty seeing the pedestrian.

Both technologies can be used with modern neural network systems, so the adaptive-versus-mapping dichotomy is actually not valid here. On the contrary, using both technologies in sensor fusion is most probably a very good approach.

Also, the gap between LIDARs and cameras is narrowing. Cameras may use active (NIR) illumination, which may help. LIDARs may be ToF (time-of-flight) sensors, essentially cameras with the ability to measure distances. That technology has improved a lot during the last few years.

And to complicate the situation even more, there are solutions between LIDARs and today's radars. Radar technology is ubiquitous, but the image created by a radar has very low resolution (due to laws of physics). However, if the frequency of the radar is increased, the resolution improves. Getting up to real millimeter wave frequencies (hundreds of GHz or even THz range) changes this. It is a continuum between conventional low-frequency (tens of GHz) radars, millimeter wave radars, terahertz radars, IR LIDARs and today's LIDARs.
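To put rough numbers on that resolution-versus-frequency continuum, here's an order-of-magnitude estimate using the diffraction limit (angular resolution ~ wavelength / aperture). The 10 cm aperture is an assumed, bumper-sized antenna/optic, chosen only so the comparison is like for like:

```python
# Diffraction-limited angular resolution: theta ~ wavelength / aperture.
# Higher frequency -> shorter wavelength -> finer resolution for the
# same physical aperture. Order-of-magnitude only.
c = 3.0e8        # m/s, speed of light
aperture = 0.10  # m, assumed 10 cm antenna/optic

for label, freq in [("77 GHz automotive radar", 77e9),
                    ("300 GHz (sub-THz) radar", 300e9),
                    ("905 nm LiDAR", c / 905e-9)]:
    wavelength = c / freq
    theta_mrad = wavelength / aperture * 1e3
    print(f"{label}: ~{theta_mrad:.3f} mrad")
# 77 GHz:  ~38.961 mrad
# 300 GHz: ~10.000 mrad
# LiDAR:   ~0.009 mrad
```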
 
I agree about Level 5. At Level 5 we have the potential to not even need speed limits anymore (on the highway at least) in a world that has gone completely autonomous. But for Apple's sake, LiDAR will be necessary at the start, and as Apple continues to (I think) secretly map out whole cities for autonomy over the next several years. Everyone knows, or at least has the impression, that Apple will enter the game with the biggest disadvantage: near zero real-world user data. Check out that article if you haven't. The author is very heavily hinting at Apple laying all the groundwork since 2015 to train its autonomy vision stack. I look forward to the next decade of autonomy.
Level 5 will need and will use all possible sensors. Even if it uses cameras for basic driving, it needs ultrasonics for parking and tight spaces. Radars, LIDARs and even thermal imaging provide additional information which can be used to improve safety. Think of a situation where an animal is running across a road in the dark. A thermal imager will detect it much quicker than any vision system, and a LIDAR will be able to follow its path accurately well before it appears in the headlights.

When we talk about advanced level 4 or level 5, we expect the cars to be much better and safer drivers than we are. One way to achieve this is to give them much better sensors than we have. Our eyes are very good cameras, so surpassing their performance significantly is difficult. But if the situational awareness is augmented by other wavelengths and active sensors, the world changes.

Is there any reason not to use a lot of different sensors? At the moment it is a bit expensive, but sensor costs go down very fast as production volumes go up. ML (mostly neural networks) copes well with different sensory inputs. (Hey, there are even some bats which use both vision and echolocation in tandem when flying, and especially when landing.)
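As a toy illustration of how naturally different sensors combine, here's a minimal late-fusion sketch. The sensors, confidence scores and weights are all invented, and a real system would learn the weighting inside a network rather than hand-set it:

```python
# Toy late fusion: each sensor reports a confidence for "pedestrian ahead";
# a weighted average combines them and degrades gracefully if one drops out.

def fuse(scores: dict, weights: dict) -> float:
    """Weighted average of per-sensor confidences, skipping missing sensors."""
    live = [s for s, v in scores.items() if v is not None]
    total = sum(weights[s] for s in live)
    return sum(weights[s] * scores[s] for s in live) / total

weights = {"camera": 0.5, "lidar": 0.3, "thermal": 0.2}  # hand-set, illustrative

# Dark rainy night: camera unsure, active/thermal sensors confident.
night = {"camera": 0.30, "lidar": 0.90, "thermal": 0.95}
print(f"fused at night: {fuse(night, weights):.2f}")       # 0.61

# Camera fully blinded by glare (no output at all): fusion still works.
glare = {"camera": None, "lidar": 0.85, "thermal": 0.90}
print(f"fused under glare: {fuse(glare, weights):.2f}")    # 0.87
```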
 
Agree 100%. That's another downfall of LiDAR: the remapping. But for Apple it will be necessary, due to a lack of actual users reporting autonomous data at launch. I'll be interested in how they dig themselves out of that hole when the Apple car launches, but I think they may have a good foundation for that. Check out that TechCrunch article about Apple Maps from 2018. Very interesting about laying the possible foundation of autonomy data, and some of what they are doing there sounds similar to what FSD does on the back end. Here's something I wish I knew more about... in 2016 Apple invested $1 billion into Didi Chuxing (the Uber of China and other parts of the world). They have "550 million users and tens of millions of drivers." Didi has a lot of maps data from ride sharing, but they also have projects being developed and deployed for autonomous cars. As part of that investment, does Apple get access to autonomy data? Gurman needs to do some investigating on that. I'd like to know.
Your comment on Didi Chuxing is spot on. I think a lot of people tend to discount that $1B investment, given no one really knows what sort of return AAPL gets on it. From this 2017 article, it looks like Cook saw the value in the data being collected by Didi.

"
But he quickly shifted focus to the vast amount of data that Didi monitors, and its “big-data algorithms.”

“By analyzing commuter patterns the way oceanographers track the tides, Didi may help traffic jams go the way of the flip phone,” Cook writes.

The comment about the algorithms builds on Cook’s earlier comments, where he said that Apple was always on the look out for great intellectual property.

“From a Didi point of view, we see that as one, a great investment. Two, we think that there’s some strategic things that the companies can do together over time. And three, we think that we’ll learn a lot about the business and the Chinese market beyond what we currently know,” Cook said last year.

"

 
of course it's not ready for primetime. there's a big label saying BETA on the Tesla software, and it nags you to keep your hands on the wheel while FSD Autopilot is on, because it simply isn't ready for all users to use. the software is still in development.

multiple different sensors don't make sense in the long run. heavy snow/rain is going to cause lidar to fail, at which point the car must rely on vision alone to drive. if a car can drive in heavy rain/snow with vision alone, it can handle most other conditions with vision alone too.
I think we have a bit of a problem with Level 5 here.

I have been driving a bit too much in snow. The weather forecast seems to indicate that I'll face a 300-mile stretch in heavy snowfall tomorrow. In the dark, the drifting snow on the road surface may cause the road to look as if it were stormy water. (On a multi-lane road the lanes tend to shift, so that lane markings and the actual tracks can be quite far away from each other; which one to take?) Once the snow rises a bit higher up, all you see with high beams is reminiscent of the space warp in movies. When you meet a truck (or worse, a snow plough), you will be blinded for several seconds.

Now, if we look at the legislation, in almost all jurisdictions one must be able to stop the vehicle within the visible part of the road. This means that I should practically stop my car every time I meet a truck. Of course, that would be extremely dangerous, because someone behind me could crash into my car at 60 mph. So the only sensible thing is to drive on and hope nothing unexpected turns up within the few blind seconds. And to make things much worse, the friction coefficients tend to be rather low in those conditions as well.
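To see how hard that rule bites, here's a rough stopping-distance calculation (the one-second reaction time and the friction coefficients are illustrative assumptions):

```python
# Stopping distance = reaction distance + braking distance (v^2 / (2*mu*g)).
g = 9.81  # m/s^2

def stopping_distance(v: float, mu: float, reaction_s: float = 1.0) -> float:
    return v * reaction_s + v**2 / (2 * mu * g)

v = 60 * 0.44704  # 60 mph in m/s (~26.8 m/s)
for mu in (0.8, 0.3, 0.15):   # dry asphalt, packed snow, near-ice
    print(f"mu={mu}: need ~{stopping_distance(v, mu):.0f} m of visible road")
# mu=0.8:  ~73 m
# mu=0.3:  ~149 m
# mu=0.15: ~271 m
```

So on a slippery road, a few blind seconds behind a snow plough mean you are, legally speaking, outdriving your sight by hundreds of meters.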

This is not the only problem caused by snow. I have personally ended up (or, rather, down) in a ditch because I drove over the shoulder. The road plus the ditch was ploughed flat, but the snow over the ditch was rather loose, as it was only December (the 24th, to be more accurate; difficult to forget). So, using my own vision, I drove off the road, and I cannot imagine how a computer vision system would not have made the same mistake.

Both examples are actually situations where we as human drivers accept that, to put it politely, excrement sometimes happens. But we are not going to accept our cars taking risks, breaking laws or just driving off the road.

Some things can be fixed by technology; sensor fusion will enable vehicles to see much more than we do. So that at least partly solves the problem. But there is still a lot we need to do with the legislation and the traffic environment before vehicles are able to get to relatively unrestricted Level 4 or Level 5.
 
I think we have a bit of a problem with Level 5 here.

I have been driving a bit too much in snow. The weather forecast seems to indicate that I'll face a 300-mile stretch in heavy snowfall tomorrow. In the dark, the drifting snow on the road surface may cause the road to look as if it were stormy water. (On a multi-lane road the lanes tend to shift, so that lane markings and the actual tracks can be quite far away from each other; which one to take?) Once the snow rises a bit higher up, all you see with high beams is reminiscent of the space warp in movies. When you meet a truck (or worse, a snow plough), you will be blinded for several seconds.

Now, if we look at the legislation, in almost all jurisdictions one must be able to stop the vehicle within the visible part of the road. This means that I should practically stop my car every time I meet a truck. Of course, that would be extremely dangerous, because someone behind me could crash into my car at 60 mph. So the only sensible thing is to drive on and hope nothing unexpected turns up within the few blind seconds. And to make things much worse, the friction coefficients tend to be rather low in those conditions as well.

This is not the only problem caused by snow. I have personally ended up (or, rather, down) in a ditch because I drove over the shoulder. The road plus the ditch was ploughed flat, but the snow over the ditch was rather loose, as it was only December (the 24th, to be more accurate; difficult to forget). So, using my own vision, I drove off the road, and I cannot imagine how a computer vision system would not have made the same mistake.

Both examples are actually situations where we as human drivers accept that, to put it politely, excrement sometimes happens. But we are not going to accept our cars taking risks, breaking laws or just driving off the road.

Some things can be fixed by technology; sensor fusion will enable vehicles to see much more than we do. So that at least partly solves the problem. But there is still a lot we need to do with the legislation and the traffic environment before vehicles are able to get to relatively unrestricted Level 4 or Level 5.
If a human driver is unable to drive in certain snowy conditions, then we shouldn't expect a level 5 car to.

The problem is that Level 5 doesn't state what kind of driver should be the standard. A 70-year-old would likely crash more than a 20-year-old in snowy conditions, for example.

With that said, I don't think a Tesla could do all the things a Level 5 system is formally stated to do. For example, a human is able to drive a car inside a mall (perhaps to park a car for a giveaway sweepstakes). Tesla hasn't trained their software to handle that capability, but Level 5 formally says it should be able to carry out that task.
 
If a human driver is unable to drive in certain snowy conditions, then we shouldn't expect a level 5 car to.

The problem is that Level 5 doesn't state what kind of driver should be the standard. A 70-year-old would likely crash more than a 20-year-old in snowy conditions, for example.

With that said, I don't think a Tesla could do all the things a Level 5 system is formally stated to do. For example, a human is able to drive a car inside a mall (perhaps to park a car for a giveaway sweepstakes). Tesla hasn't trained their software to handle that capability, but Level 5 formally says it should be able to carry out that task.
Yes, true Level 5 is very, very, very, very difficult. We have had a pretty snowy winter. I need to do parallel parking in relatively tight spaces with knee-deep slushy snow (with progressively hardening bits of ice in it). Having more than two decades of that experience, I only get stuck a few times a winter. That is why I have a snow shovel in the trunk, as does almost anyone else who needs to park in the same snowy mess before someone cares to come and collect the snow. (And once the plough comes, the cars are behind a hip-deep wall of snow. Shovelling is great exercise, but slightly beyond autonomous vehicles' capabilities.)

Another wintry example. When the friction coefficient is low, it is often different for different vehicles, depending on the tyres and how the ABS/ESC functions. So an autonomous vehicle must keep a long distance to play safe, as it may happen that the vehicle in front has much better grip. However, if all cars keep, say, a three-second distance to the car in front of them, the road capacity is less than 1200 vehicles/lane/hour (as every vehicle takes more than 3 seconds). Still, in practice there are at least 2000 vehicles per lane per hour. This, of course, means that if something unexpected happens, local garages will be employed for a while.
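Spelling out the capacity arithmetic from the post (one vehicle per headway interval; the figures are the ones given above):

```python
headway_s = 3.0                 # safe following gap on low friction
print(3600 / headway_s)         # 1200.0 vehicles/lane/hour at best

observed = 2000                 # veh/lane/h actually seen in practice
print(3600 / observed)          # 1.8 s: the average headway drivers really keep
```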

But we accept the risk, because otherwise the traffic would come to a standstill. Our traffic system is built that way. (The friction coefficient is important here, because inter-vehicle communication does not help much with this specific scenario. In dry conditions, decent V2V helps to reduce distances.)

I know snow is not a real problem in most parts of the world, so it is an extreme example. However, the Level 5 definition carries the words "in all conditions". Of course, "the most skilled driver" and "all drivers" are different things, but even when we talk about the median driver, the problem is there. Removing the steering wheel from the car is difficult unless we accept that autonomous vehicles may take relatively large risks, just as human beings do. I do not think we will. We need to rethink vehicle traffic, both the road environment and the legislation, before Level 5 happens. It will take decades.

Level 4 is more realistic, because the definition is that the car drives by itself in limited conditions (limited weather conditions, limited geographic area). That may happen relatively soon, but it will be restricted to good conditions and certain roads. Highways are the easiest environment, but local taxis/deliveries in some areas may be possible within a few years.

(A reality check: we are right now at Level 3; the driver needs to be ready to take control, and cars reach Level 3 only in good weather, on certain highways, at below 38 mph.)
 
Surprised no one has mentioned Aeva. They claim to be working on "4D LIDAR" (whatever that means... a fancy term for ToF?), but they were founded by two former Apple engineers and have partnered with Denso for production.
 
Your comment on Didi Chuxing is spot on. I think a lot of people tend to discount that $1B investment, given no one really knows what sort of return AAPL gets on it. From this 2017 article, it looks like Cook saw the value in the data being collected by Didi.

"
But he quickly shifted focus to the vast amount of data that Didi monitors, and its “big-data algorithms.”

“By analyzing commuter patterns the way oceanographers track the tides, Didi may help traffic jams go the way of the flip phone,” Cook writes.

The comment about the algorithms builds on Cook’s earlier comments, where he said that Apple was always on the look out for great intellectual property.

“From a Didi point of view, we see that as one, a great investment. Two, we think that there’s some strategic things that the companies can do together over time. And three, we think that we’ll learn a lot about the business and the Chinese market beyond what we currently know,” Cook said last year.

"

It's exciting stuff. I'd love to know all the details of this. Those are great quotes. They have a lot of data. Check out their website and turn on translation, unless you are fluent in Chinese. Lol. They have dedicated pages and projects about big plans for their technology: Didi Brain, AI Labs, autonomous driving, Didi Cloud. If these things pan out the way Didi hopes they do, and Apple has access to them, Apple has the potential to take everyone by surprise.
 
Level 5 will need and will use all possible sensors. Even if it uses cameras for basic driving, it needs ultrasonics for parking and tight spaces. Radars, LIDARs and even thermal imaging provide additional information which can be used to improve safety. Think of a situation where an animal is running across a road in the dark. A thermal imager will detect it much quicker than any vision system, and a LIDAR will be able to follow its path accurately well before it appears in the headlights.

When we talk about advanced level 4 or level 5, we expect the cars to be much better and safer drivers than we are. One way to achieve this is to give them much better sensors than we have. Our eyes are very good cameras, so surpassing their performance significantly is difficult. But if the situational awareness is augmented by other wavelengths and active sensors, the world changes.

Is there any reason not to use a lot of different sensors? At the moment it is a bit expensive, but sensor costs go down very fast as production volumes go up. ML (mostly neural networks) copes well with different sensory inputs. (Hey, there are even some bats which use both vision and echolocation in tandem when flying, and especially when landing.)
What's most important to this, and even bigger than Level 5, is that all the cars need to be able to talk to each other via some sort of standard technology. Your Tesla may be able to talk to another Tesla two cars up, but what about the Ford stuck in the middle? It's at a huge disadvantage not having the data of what's ahead and behind. I'm not sure if manufacturers will ever agree on this, since everyone wants to do what's best for their own brand.
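For the curious, here's a toy sketch of the kind of state such a standardized message would broadcast, loosely inspired by the fields in SAE J2735's Basic Safety Message (the struct and field names here are illustrative, not the actual wire format):

```python
from dataclasses import dataclass

@dataclass
class BasicSafetyMessage:
    """Toy V2V beacon, loosely modeled on a J2735-style BSM."""
    temp_id: int          # rotating pseudonym, not a permanent vehicle ID
    timestamp_ms: int
    latitude: float       # degrees
    longitude: float      # degrees
    speed_mps: float
    heading_deg: float
    brake_applied: bool
    hard_decel: bool      # lets cars several vehicles back react early

# The Ford "stuck in the middle" only benefits if every make broadcasts and
# listens on the same channel (DSRC or C-V2X) with the same message format.
msg = BasicSafetyMessage(0x5A3F, 1_700_000_000_000, 37.33, -122.01,
                         26.8, 92.0, True, True)
print(msg)
```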
 