I didn't really understand the point of this. They never showed how this connects to iOS. We just saw some toy cars.
Toy cars which analyze their surroundings up to 500 times per second and then use that information to decide what to do next.
Don't see them as toy cars. They are autonomous, self-driving robotic vehicles operating in real time.
At any given snapshot in time they need to know where they are (presumably using sensors to determine their relative position on the track, though it is possible they have access to the top-view camera for image processing -- I didn't catch the whole demo so I don't know). How far are they from the edges of the "road"? Are there any obstacles or other vehicles nearby? If so, which ones? Where will they be in the next snapshot if they continue at their current velocity? Should they apply the brakes? Accelerate? Steer left or right?
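If I had to sketch what one of those per-snapshot decisions might look like -- and this is pure guesswork on my part, made-up names and numbers in Python rather than whatever Anki actually wrote -- it would be something like: predict the next position, correct toward the track centerline, and brake if anything is projected to be in the way.

```python
import math

# Hypothetical sketch of one 2 ms "snapshot" of the control loop.
# Every name and threshold here is my invention, not Anki's code.
def decide(position, heading, speed, track_center_x, obstacles, dt=1 / 500):
    """Given one sensor snapshot, pick a (steer, throttle) command."""
    # Predict where the car will be next snapshot if nothing changes.
    predicted = (position[0] + speed * math.cos(heading) * dt,
                 position[1] + speed * math.sin(heading) * dt)

    # Steer back toward the track centerline if we're drifting off it.
    lateral_offset = predicted[0] - track_center_x
    steer = -0.5 * lateral_offset  # simple proportional correction

    # Brake if any obstacle is projected to sit closely ahead of us.
    too_close = any(abs(ox - predicted[0]) < 0.05 and 0 < oy - predicted[1] < 0.2
                    for ox, oy in obstacles)
    throttle = 0.0 if too_close else 1.0
    return steer, throttle
```

Even a toy version like this makes the 500 Hz figure feel plausible: each snapshot is only a handful of arithmetic operations, well within reach of a phone-class CPU.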
And that's just to maintain a course around the track. They demonstrated that each vehicle can also be given mission parameters: "Block that other car." Now they need to calculate more than just their own motion -- they also need to sense the positions of the other cars, calculate what those cars are doing, and then calculate a reactionary move to foil the opponent. Now they have multiple levels of operation -- level 1 is "survive" (e.g. don't crash or fall off the track), level 2 is "fulfill my mission".
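That two-level split resembles a priority arbiter (subsumption-style): the survival layer only speaks up when it has to, and when it does, it overrides the mission layer. Here's a toy Python sketch of the idea -- every function name, field, and threshold is hypothetical, just me illustrating the layering:

```python
# Hypothetical two-level arbiter: level 1 ("survive") can always
# override level 2 ("fulfill my mission"). All names are invented.
def arbitrate(survive_cmd, mission_cmd):
    """Return the mission command unless survival demands otherwise."""
    # survive_cmd is None when the car is in no danger this snapshot.
    return survive_cmd if survive_cmd is not None else mission_cmd

def survive(distance_to_edge):
    # Level 1: only intervene when we're about to leave the track.
    if distance_to_edge < 0.02:  # metres; made-up threshold
        return {"steer": "center", "throttle": 0.3}
    return None

def block_opponent(opponent_lateral_offset):
    # Level 2: mirror the opponent's lane position to block a pass.
    return {"steer": opponent_lateral_offset, "throttle": 1.0}
```

The nice property is that the mission logic never needs to know about track edges at all; the arbiter guarantees "don't fall off" always wins.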
Did I mention this was all happening 500 times per second?
And all being calculated on an iOS device?
Presumably one iOS device controls each vehicle. So there is a steady stream of chatter going back and forth over Bluetooth. The vehicle probably sends up telemetry about what it sees/senses, then the iOS app calculates what it thinks the car should do next, then sends a command back via Bluetooth.
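The round trip I'm imagining -- telemetry up, command back -- could be as simple as this Python sketch. To be clear, the message shapes and field names are entirely my invention (and at 500 Hz the real protocol is surely a compact binary format, not JSON):

```python
import json

# Rough sketch of one telemetry -> decision -> command cycle.
# Field names ("speed", "throttle", "steer") are hypothetical.
def handle_telemetry(packet: bytes) -> bytes:
    """Decode one telemetry packet, decide, encode a command packet."""
    telemetry = json.loads(packet)   # e.g. {"speed": 1.8, "pos": [...]}
    speed = telemetry["speed"]

    # Trivial stand-in policy for the real planner: ease off when fast.
    command = {"throttle": 0.5 if speed > 1.5 else 1.0, "steer": 0.0}
    return json.dumps(command).encode()  # sent back over Bluetooth
```

On iOS the transport would presumably be Core Bluetooth, with the app subscribing to a telemetry characteristic and writing commands to another -- but that's my inference from the demo, not anything they stated.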