The ARCore Depth API allows developers to use our depth-from-motion algorithms to create a depth map using a single RGB camera. The depth map is created by taking multiple images from different angles and comparing them as you move your phone to estimate the distance to every pixel.
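To make this concrete, here is a minimal Kotlin sketch of how an Android app can opt into depth and read a depth map with ARCore. The calls shown (enabling `Config.DepthMode.AUTOMATIC` and acquiring a DEPTH16 image from a frame) reflect the public ARCore API; the helper function names and surrounding structure are illustrative, not part of the announcement.

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session
import com.google.ar.core.exceptions.NotYetAvailableException

// Enable the Depth API on devices that support it. Depth is estimated from
// motion with the single RGB camera, so no dedicated depth sensor is required.
fun configureDepth(session: Session) {
    val config = session.config
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
        config.depthMode = Config.DepthMode.AUTOMATIC
    }
    session.configure(config)
}

// Fetch the latest depth map for a frame. Each pixel of the returned DEPTH16
// image holds an estimated distance to the scene, in millimeters.
fun acquireDepth(frame: Frame): android.media.Image? =
    try {
        frame.acquireDepthImage()
    } catch (e: NotYetAvailableException) {
        null // Depth is not available for every frame, e.g. right after session start.
    }
```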
One principle that makes AR more realistic is occlusion, or the “ability for digital objects to accurately appear in front of or behind real-world objects.” It lets applications ensure that virtual objects are not simply floating in space or placed in physically impossible positions. This is particularly useful for apps that let you preview furniture in your living room, since the virtual piece can correctly disappear behind a real couch or table.
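As a rough illustration of how a depth map enables occlusion, the sketch below compares the estimated distance to the real scene at a pixel against the distance to the virtual object rendered at that pixel. In practice this test runs per fragment on the GPU against the DEPTH16 texture; the function and parameter names here are hypothetical.

```kotlin
// Hypothetical per-pixel occlusion test. A real renderer performs this
// comparison in a fragment shader using the depth map as a texture.
fun isVirtualPixelVisible(
    realDepthMillimeters: Int,    // estimated distance to the real surface at this pixel
    virtualDepthMillimeters: Int  // distance to the virtual object at this pixel
): Boolean {
    // Draw the virtual pixel only if no real surface sits in front of it.
    return virtualDepthMillimeters <= realDepthMillimeters
}
```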
This single-lens approach lowers the barrier to entry because it does not require specialized cameras or sensors. That said, the Depth API will only get better as phone hardware improves. Google is opening the new ARCore Depth API to developer collaboration today.
For example, the addition of depth sensors, such as time-of-flight (ToF) sensors, to new devices will help create more detailed depth maps. That will improve existing capabilities like occlusion and unlock new ones such as dynamic occlusion, the ability for virtual objects to be hidden behind moving real-world objects.