Today's 3D scanners can generate 3D models in which humans easily recognize objects such as vegetation, ground, buildings, power transformers, utility poles and cables, and windows and doors. Automating this classification of point clouds and meshes, also known as feature extraction, would streamline many currently time-consuming workflows across a range of industries. In a panel discussion at the SPAR3D 2017 conference, Greg Bentley, CEO of Bentley Systems, alluded to a Bentley-Microsoft project to apply machine learning to classifying reality meshes and hinted that we might see fully classified reality meshes demonstrated at next year's joint SPAR3D + AEC ST conference in Anaheim.
One example where automated feature extraction would provide major benefits is vegetation management for transmission lines. NERC mandates periodic inspection of North American transmission lines. Inspection is currently carried out by helicopter-borne LiDAR, which is expensive and for that reason not done more often than NERC requires. Full automation, which is possible with beyond-visual-line-of-sight (BVLOS) autonomous UAVs and automated extraction of lines, pylons, vegetation, and ground, would dramatically reduce the cost of inspection and enable more frequent flyovers. The major benefits would be lower costs and a reduction in the number and length of outages caused by vegetation contacting transmission lines.
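Once a point cloud has been classified into vegetation and conductor points, the core geometric check is straightforward: measure the distance from each vegetation point to the nearest conductor point and flag anything inside the required clearance. A minimal sketch of that check (the function name, sample coordinates, and 3 m threshold are illustrative assumptions, not from any vendor's software, and a production tool would use a spatial index rather than brute force):

```python
import numpy as np

def clearance_violations(veg_pts, line_pts, min_clearance):
    """Flag vegetation points closer than min_clearance (same units as
    the coordinates) to any conductor point. Brute-force pairwise
    distances; at survey scale a k-d tree would be used instead."""
    # Pairwise distance matrix of shape (n_veg, n_line)
    d = np.linalg.norm(veg_pts[:, None, :] - line_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)          # nearest conductor point per veg point
    return nearest < min_clearance

# Hypothetical sample: a conductor running along y at 10 m height,
# and two vegetation points below it
line = np.array([[0.0, float(y), 10.0] for y in range(5)])
veg = np.array([[0.0, 2.0, 8.5],   # 1.5 m below the conductor
                [0.0, 2.0, 2.0]])  # 8.0 m below the conductor
print(clearance_violations(veg, line, min_clearance=3.0))
```

With a 3 m clearance requirement, only the first vegetation point is flagged.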
The latest 3D scanners, capable of creating high-precision photorealistic models of just about anything, from digital terrain models and cities to detailed models of cathedrals, are becoming much more affordable and accessible. The current lowest price point is about $400 to $450 to add 3D scanning capability to a tablet, and in the future 3D scanning and imaging may simply be built into smartphones. Automating feature extraction from the exponentially increasing volume of 3D data, whether point clouds or meshes, is going to grow in importance.
But automating feature recognition and extraction from a point cloud or mesh remains a challenge. Researchers have attempted to use machines to recognize features in imagery ever since computers were able to manipulate and analyze raster data. In some cases the imagery contains a restricted set of object types in a known environment, which makes feature recognition simpler: for example, identifying icebergs in satellite or overflight imagery of the North Atlantic, or classifying the pipes, structural steel, and ducts that make up most of a refinery or chemical plant.
For this type of environment, with well-defined geometrical objects, software such as Trimble RealWorks (TRW) can identify walls, floors, and ceilings inside buildings, and ground, buildings, vegetation, and curbs and gutters outside. At SPAR3D it was announced that the latest version, TRW 10.3, can even resolve power lines, which is critical for automating vegetation management for transmission lines.
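Objects with well-defined geometry are typically found by fitting geometric primitives such as planes and cylinders to the point cloud, commonly with a RANSAC-style approach: repeatedly fit a candidate shape to a tiny random sample of points and keep the shape supported by the most points. A minimal plane-fitting sketch along those lines (this is an illustration of the general technique, not Trimble's actual algorithm; the function name, tolerances, and synthetic data are assumptions):

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.05, seed=None):
    """Find the dominant plane in an (N, 3) point cloud with RANSAC.
    Returns ((unit_normal, offset), inlier_mask) for n . x = offset."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Hypothesis: the plane through 3 randomly chosen points
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:              # skip degenerate (collinear) samples
            continue
        n = n / norm
        # Score: how many points lie within tol of the candidate plane
        inliers = np.abs(points @ n - n @ p0) < tol
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (n, n @ p0), inliers
    return best_model, best_inliers

# Synthetic "scan": a flat floor near z = 0 plus random clutter above it
rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(0, 10, 500),
                         rng.uniform(0, 10, 500),
                         rng.normal(0, 0.01, 500)])
clutter = np.column_stack([rng.uniform(0, 10, 100),
                           rng.uniform(0, 10, 100),
                           rng.uniform(0.5, 5.0, 100)])
cloud = np.vstack([floor, clutter])
model, inliers = ransac_plane(cloud, seed=1)
print(inliers.sum())  # roughly the 500 floor points
```

Commercial tools layer many refinements on top of this idea (least-squares refits, connectivity checks, cylinder and line primitives for pipes and conductors), but primitive fitting against a tolerance is the common core.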
I have blogged about Indoor Reality, which has developed technology that accurately tracks the location and orientation (six degrees of freedom) of a backpack containing LiDAR and other sensors as the operator walks through a building, going into rooms and halls, up and down stairs, and into other human-accessible spaces. Software tools are provided to generate intelligent triangulated mesh surfaces. The package includes analytical tools for automatically generating floor plans, distinguishing rooms, and identifying staircases, doors, and windows; in fact, everything needed for a Revit model of the building's interior.
But for general imagery, automated feature extraction typically follows the Pareto principle: extracting 80% of the features is easy, but the remaining 20% is difficult and requires most of the effort. Currently companies such as ClearEdge3D offer specialized software, EdgeWise, for classification of 3D imagery. Trimble has licensed EdgeWise from ClearEdge3D for construction applications, where it can be used to extract pipes, conduits, structural steel, ducts, structural concrete, walls, windows, and doors from 3D scans. But the current algorithms cannot classify everything, and the process remains semi-automated: ClearEdge3D claims that, in general, EdgeWise speeds up feature extraction by 70%. I am looking forward with great interest to seeing fully classified reality meshes demonstrated, perhaps as early as next year's joint SPAR3D + AEC ST conference in Anaheim.