Drones are very efficient at capturing large volumes of data, but that data often requires hours of post-processing in the cloud before it can be turned into 3D reconstructions for visualization and other applications. In his talk at SPAR3D 2018 in Anaheim, Michael Jones of Lockheed Martin showed how a drone carrying just a photo camera, with access to a GPU (graphics processing unit) on a laptop, can use SLAM for real-time 3D reconstruction, enabling advanced visualization and 3D modeling that delivers immediate ‘in context’ information. Generating 3D point clouds and 3D imagery in real time as the vehicle flies enables applications that must operate in real time, for example automating the inspection of large critical infrastructure, or comparing architectural drawings to actual 3D imagery for immediate progress updates.
SLAM (simultaneous localization and mapping) is a technique that builds a 3D map of the environment from the images of a 2D camera or another sensor while tracking the sensor's position within that map, and the resulting map can be used for 3D reconstruction. Applications include anything requiring 3D information, such as collision avoidance, inspections, or navigation in GPS-denied locations. I have blogged about how a UAV equipped with LiDAR and using SLAM for navigation can operate in areas where GPS is not available. But LiDAR is heavy and expensive, and it requires an expensive drone to carry it for any significant amount of time.
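To give a feel for how a camera-only pipeline starts, here is a minimal sketch of the first step of visual SLAM: detecting and matching features between two frames. This is not the Lockheed Martin or Hydra implementation; it simply uses OpenCV's ORB detector and brute-force matcher for illustration.

```python
# Illustrative sketch only: feature detection and matching between two camera
# frames, the first step of a typical visual SLAM pipeline.
import cv2

def match_features(frame_a, frame_b, max_matches=200):
    """Detect ORB keypoints in two grayscale frames and return matched pixel pairs."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, desc_a = orb.detectAndCompute(frame_a, None)
    kp_b, desc_b = orb.detectAndCompute(frame_b, None)

    # Hamming distance suits ORB's binary descriptors; crossCheck keeps only
    # mutually consistent matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)

    # Pixel coordinates of the strongest matches in each frame.
    pts_a = [kp_a[m.queryIdx].pt for m in matches[:max_matches]]
    pts_b = [kp_b[m.trainIdx].pt for m in matches[:max_matches]]
    return pts_a, pts_b
```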
Modern GPUs are designed for massively parallel computation, and SLAM, which is based on triangulation, can be parallelized to take advantage of this capability. Laptops equipped for gaming already include very fast GPUs. In his talk Michael demonstrated how a drone with just a photo camera, connected via a wireless link to a laptop, can use SLAM to create a 3D reconstruction of an area in real time. The software, developed in Canada, is called Hydra and is commercially available. Michael also hinted at work underway that would enable SLAM processing onboard the drone.
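The triangulation step itself is easy to picture. The sketch below, again using OpenCV rather than Hydra, turns matched image points into 3D points once the relative camera pose (rotation R and translation t) and camera intrinsics K are known; because each point is triangulated independently, the work parallelizes naturally on a GPU.

```python
# Illustrative sketch only: triangulating matched image points into a 3D point
# cloud, given the camera intrinsics and the relative pose between two views.
import numpy as np
import cv2

def triangulate(pts_a, pts_b, K, R, t):
    """Triangulate matched pixel coordinates from two views into 3D points.

    K    -- 3x3 camera intrinsic matrix
    R, t -- rotation and translation of the second camera relative to the first
    """
    # Projection matrices for the two views: P = K [R | t].
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t.reshape(3, 1)])

    pts_a = np.asarray(pts_a, dtype=float).T  # shape (2, N)
    pts_b = np.asarray(pts_b, dtype=float).T

    # Homogeneous 4xN result; divide by the last row to get 3D coordinates.
    pts_4d = cv2.triangulatePoints(P1, P2, pts_a, pts_b)
    return (pts_4d[:3] / pts_4d[3]).T  # N x 3 point cloud
```

A real-time system repeats this for every new frame and refines the map as it goes, which is where the raw throughput of a gaming-class GPU pays off.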