
100001 · macrumors newbie · Original poster · May 27, 2021
The patents for Apple's new AR/VR device suggest it will be using dynamic vision sensors (DVS) on the headset. A DVS, also known as an event camera, is a newer type of sensor: instead of capturing full frames, each pixel asynchronously reports changes in brightness, yielding very high temporal resolution and very sparse data.

Because only changing pixels produce output, these sensors can run at extremely low power.
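To make the idea concrete, here is a toy frame-based simulation of how a DVS pixel decides to fire: it compares log intensity against the previous value and emits an event only where the change exceeds a threshold. (Real DVS pixels do this asynchronously in analog circuitry; the function and threshold value here are illustrative assumptions, not anything from the patents.)

```python
import numpy as np

def events_from_frames(prev, curr, t, threshold=0.2):
    """Toy event-camera model: compare log intensity of two frames and
    emit (x, y, t, polarity) events where the change exceeds a
    comparator threshold. Illustrative only."""
    diff = np.log1p(curr.astype(float)) - np.log1p(prev.astype(float))
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    return [(int(x), int(y), t, 1 if diff[y, x] > 0 else -1)
            for x, y in zip(xs, ys)]

prev = np.zeros((4, 4), dtype=np.uint8)
curr = np.zeros((4, 4), dtype=np.uint8)
curr[1, 2] = 200  # one pixel brightens -> a single positive event
events = events_from_frames(prev, curr, t=0.001)
```

Note that a static scene produces no events at all, which is where the sparsity and power savings come from.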

To understand what an event camera is, you can read about it here:



Some of the patents discussing Dynamic Vision Sensors:

https://patents.google.com/patent/US10845601B1/en

AR/VR controller with event camera

Abstract
In one implementation, a method involves obtaining light intensity data from a stream of pixel events output by an event camera of a head-mounted device ("HMD"). Each pixel event is generated in response to a pixel sensor of the event camera detecting a change in light intensity that exceeds a comparator threshold. A set of optical sources disposed on a secondary device that are visible to the event camera are identified by recognizing defined illumination parameters associated with the optical sources using the light intensity data. Location data is generated for the optical sources in an HMD reference frame using the light intensity data. A correspondence between the secondary device and the HMD is determined by mapping the location data in the HMD reference frame to respective known locations of the optical sources relative to the secondary device reference frame.
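The "defined illumination parameters" in the abstract suggest the controller's LEDs could be distinguished by how they blink. One plausible reading (purely my assumption, not Apple's disclosed method) is estimating each source's blink frequency from the timestamps of events at a pixel:

```python
def estimate_blink_rate(timestamps):
    """Estimate an LED's blink frequency (Hz) from the timestamps of
    positive events seen at one pixel. Matching this rate against each
    source's known illumination parameters is a hypothetical scheme,
    not the patent's actual implementation."""
    if len(timestamps) < 2:
        return 0.0
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return 1.0 / (sum(intervals) / len(intervals))

# Events arriving every 10 ms would indicate a 100 Hz blinker.
rate = estimate_blink_rate([0.0, 0.01, 0.02, 0.03])
```

Because an event camera timestamps changes with microsecond precision, even closely spaced blink rates would be easy to tell apart.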


https://patents.google.com/patent/US20200348755A1/en

Event camera-based gaze tracking using neural networks

Abstract
One implementation involves a device receiving a stream of pixel events output by an event camera. The device derives an input image by accumulating pixel events for multiple event camera pixels. The device generates a gaze characteristic using the derived input image as input to a neural network trained to determine the gaze characteristic. The neural network is configured in multiple stages. The first stage of the neural network is configured to determine an initial gaze characteristic, e.g., an initial pupil center, using reduced resolution input(s). The second stage of the neural network is configured to determine adjustments to the initial gaze characteristic using location-focused input(s), e.g., using only a small input image centered around the initial pupil center. The determinations at each stage are thus efficiently made using relatively compact neural network configurations. The device tracks a gaze of the eye based on the gaze characteristic.
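The "derived input image" step in this abstract is straightforward to sketch: accumulate each event's polarity into a 2D grid, which can then be fed to a network. (The event tuple layout and accumulation rule here are my assumptions; the patent does not specify them.)

```python
import numpy as np

def accumulate_events(events, shape):
    """Accumulate (x, y, t, polarity) pixel events into a 2D image,
    the kind of derived input the abstract describes feeding to a
    gaze-tracking network. Details are assumed, not from the patent."""
    img = np.zeros(shape, dtype=float)
    for x, y, _, p in events:
        img[y, x] += p
    return img

events = [(2, 1, 0.001, 1), (2, 1, 0.002, 1), (0, 0, 0.003, -1)]
img = accumulate_events(events, (4, 4))
```

The two-stage design then makes sense: a coarse pass over a downsampled version of this image finds a rough pupil center, and a second pass runs only on a small crop around it, keeping both networks compact.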




Further to this, it appears that Apple is mentioning spiking neural networks for deformable object tracking.


https://patents.google.com/patent/US20200273180A1/en

Deformable object tracking

Abstract
Various implementations disclosed herein include devices, systems, and methods that use event camera data to track deformable objects such as faces, hands, and other body parts. One exemplary implementation involves receiving a stream of pixel events output by an event camera. The device tracks the deformable object using this data. Various implementations do so by generating a dynamic representation of the object and modifying the dynamic representation of the object in response to obtaining additional pixel events output by the event camera. In some implementations, generating the dynamic representation of the object involves identifying features disposed on the deformable surface of the object using the stream of pixel events. The features are determined by identifying patterns of pixel events. As new event stream data is received, the patterns of pixel events are recognized in the new data and used to modify the dynamic representation of the object.
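As a minimal stand-in for the patent's "modify the dynamic representation in response to additional pixel events," here is a sketch that nudges each tracked feature toward the centroid of nearby new events. (Apple's actual method is not public; the radius parameter and centroid update are purely illustrative.)

```python
def update_features(features, events, radius=2.0):
    """Move each tracked (x, y) feature toward the centroid of events
    that land within `radius` of it, leaving features with no nearby
    events untouched. A hypothetical illustration, not Apple's method."""
    updated = []
    for fx, fy in features:
        near = [(x, y) for x, y, *_ in events
                if abs(x - fx) <= radius and abs(y - fy) <= radius]
        if near:
            cx = sum(x for x, _ in near) / len(near)
            cy = sum(y for _, y in near) / len(near)
            updated.append((cx, cy))
        else:
            updated.append((fx, fy))
    return updated

events = [(5, 5, 0.001, 1), (6, 5, 0.002, 1)]
features = update_features([(5.0, 5.0)], events)
```

Since a deforming surface (a moving hand, a changing facial expression) generates dense event activity exactly along the parts that move, this style of update naturally tracks the deformation.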
 