Following the launch of the new iPad Pro and the arrival of iOS 13.4, Apple has also released the next evolution of ARKit. Enter ARKit 3.5, which the company just made available to developers. The update takes full advantage of the LiDAR Scanner in the new iPad Pro models, along with the new depth-sensing system. This will help support a “new generation” of AR apps.
ARKit 3.5 takes advantage of the new LiDAR Scanner and depth-sensing system on iPad Pro to support a new generation of AR apps that use Scene Geometry for enhanced scene understanding and object occlusion. And now, AR experiences on iPad Pro are even better with instant AR placement, and improved Motion Capture and People Occlusion — all without the need to write any new code.
ARKit 3.5 features new APIs as well, including Scene Geometry, Instant AR, and Improved Motion Capture & People Occlusion. Here’s how each breaks down:
Scene Geometry lets you create a topological map of your space with labels identifying floors, walls, ceilings, windows, doors, and seats. This deep understanding of the real world unlocks object occlusion and real-world physics for virtual objects, and also gives you more information to power your AR workflows.
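As a rough illustration of how an app opts into Scene Geometry, here is a minimal Swift sketch using ARKit's scene reconstruction API. The function and class names are illustrative; the ARKit calls (`supportsSceneReconstruction`, `sceneReconstruction`, `ARMeshAnchor`) are the real API surface, but this is an untested sketch, not a complete app:

```swift
import ARKit
import RealityKit

final class SceneGeometryDemo: NSObject, ARSessionDelegate {
    let arView = ARView(frame: .zero)

    func start() {
        let config = ARWorldTrackingConfiguration()
        // .meshWithClassification asks ARKit to label mesh faces
        // (floor, wall, ceiling, window, door, seat, and so on),
        // which requires the LiDAR Scanner on iPad Pro.
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) {
            config.sceneReconstruction = .meshWithClassification
        }
        arView.session.delegate = self
        arView.session.run(config)
    }

    // Mesh data arrives as ARMeshAnchor instances.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let meshAnchor as ARMeshAnchor in anchors {
            // meshAnchor.geometry holds vertices, faces, and per-face
            // classifications describing the scanned surfaces.
            print("Mesh anchor added with \(meshAnchor.geometry.faces.count) faces")
        }
    }
}
```

The classified mesh is what makes object occlusion and real-world physics possible: virtual content can collide with, or disappear behind, the reconstructed surfaces.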
The LiDAR Scanner on iPad Pro enables incredibly quick plane detection, allowing for the instant placement of AR objects in the real world without scanning. Instant AR placement is automatically enabled on iPad Pro for all apps built with ARKit, without any code changes.
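While instant placement itself requires no code changes, an app still needs to decide where to put its content. A common pattern is raycasting from a screen tap; the sketch below (function name and box size are illustrative) shows how the LiDAR-backed plane detection lets this resolve almost immediately:

```swift
import ARKit
import RealityKit

// A hedged sketch: anchor a virtual object on the surface under a tap.
// On LiDAR-equipped iPad Pros the raycast succeeds almost instantly,
// since plane detection no longer requires scanning the room first.
func placeObject(at screenPoint: CGPoint, in arView: ARView) {
    // Ask ARKit for a real-world surface under the tap point.
    guard let result = arView.raycast(from: screenPoint,
                                      allowing: .estimatedPlane,
                                      alignment: .any).first else { return }

    // Anchor a simple 10 cm box at the hit location.
    let anchor = AnchorEntity(world: result.worldTransform)
    anchor.addChild(ModelEntity(mesh: .generateBox(size: 0.1)))
    arView.scene.addAnchor(anchor)
}
```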
Improved Motion Capture and People Occlusion
With ARKit 3.5 on iPad Pro, depth estimation in People Occlusion and height estimation in Motion Capture are more accurate. These two features improve on iPad Pro in all apps built with ARKit, without any code changes.
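The improvements apply automatically, but an app must still opt into the underlying features. A minimal sketch of enabling People Occlusion (and, separately, body tracking for Motion Capture) might look like this; the function name is illustrative, the ARKit symbols are real:

```swift
import ARKit

// People Occlusion: segment people in the camera feed and use depth
// so virtual content can pass realistically behind them.
func runWithPeopleOcclusion(on session: ARSession) {
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        config.frameSemantics.insert(.personSegmentationWithDepth)
    }
    session.run(config)
}

// Motion Capture uses its own configuration; ARKit then delivers
// ARBodyAnchor updates describing the tracked person's skeleton.
func runWithBodyTracking(on session: ARSession) {
    guard ARBodyTrackingConfiguration.isSupported else { return }
    session.run(ARBodyTrackingConfiguration())
}
```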
The LiDAR Scanner is only available in the new 11- and 12.9-inch iPad Pro models. However, Apple is rumored to be bringing the same hardware to the iPhones launching later this year.