There are two advances in technology that are having almost as much impact on mapping our world and tracking the location of things in it as the introduction of GPS. The development of microelectromechanical systems (MEMS) in the 1980s and 1990s made possible the mass production of miniature inertial measurement units (IMUs) that combine accelerometers, gyroscopes, and magnetometers. These enabled the smartphone, which contains a range of sensors, typically an accelerometer, thermometer, gyroscope, pressure sensor, humidity sensor, light sensor, location sensor (GPS), and a magnetometer. The second major advance is fast structure-from-motion (SfM) algorithms. One of the first applications was simultaneous localization and mapping (SLAM) technology, which was developed for autonomous vehicles. In 2012 GeoSLAM was developed, making it possible to solve a major spatial problem: accurately tracking the location of a person inside a building or in an obstructed environment such as a city. Recently SfM has been successfully applied to indoor mapping in the form of new professional-level products from Indoor Reality and Leica Geosystems (Pegasus Backpack).
It turns out that structure-from-motion technology can also be used for on-site visualization of a virtual BIM design in a real urban context. At Geo Business 2016 I had a chance to talk to Crispin Hoult about UrbanPlanAR, an augmented reality product that his company Linknode is developing with SfM technology from Heriot-Watt University and that is designed to do just this on consumer handheld smart devices such as an iPad.
Background
For years researchers and technologists have been struggling with a major spatial challenge - accurately tracking the location of a person in an obstructed environment, such as a city or the inside of a building, in six degrees of freedom: 3D location and 3D orientation. Structure-from-motion solves this problem and enables you to track your position and orientation in an obstructed environment where GNSS satellite signals are unavailable or unreliable. Typically, input from an inertial measurement unit (IMU) and LiDAR is used to construct in real time a 3D map of the immediate environment, including walls and other objects. As you move, the map is reconstructed and used to record your trajectory. The algorithm is robust and helps to address some of the problems, such as drift, that result from relying on an accelerometer alone.
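To make the drift problem concrete, here is a minimal, hypothetical one-dimensional sketch (all numbers invented for illustration) of why double-integrating an accelerometer on its own is not enough, and why the periodic absolute fixes that a map-based approach can supply keep the error bounded:

```python
# Hypothetical 1-D illustration of inertial drift: a small constant accelerometer
# bias, integrated twice, makes the position error grow quadratically. A periodic
# absolute pose fix (as a SLAM-style map match would supply) keeps the error bounded.
dt, bias, fix_period = 0.01, 0.05, 5.0   # 100 Hz samples, 0.05 m/s^2 bias, fix every 5 s

def dead_reckon(duration_s, use_fixes):
    """Return the worst position error seen while double-integrating a biased accelerometer."""
    vel = pos = worst = 0.0
    next_fix = fix_period
    for step in range(int(round(duration_s / dt))):
        vel += bias * dt                  # integrate (biased) acceleration -> velocity
        pos += vel * dt                   # integrate velocity -> position
        worst = max(worst, abs(pos))
        if use_fixes and (step + 1) * dt >= next_fix:
            vel = pos = 0.0               # an absolute fix (e.g. a map match) zeroes the error
            next_fix += fix_period
    return worst

print(f"worst drift over 60 s, IMU only  : {dead_reckon(60, False):.1f} m")   # roughly 90 m
print(f"worst drift over 60 s, with fixes: {dead_reckon(60, True):.2f} m")    # well under 1 m
```

Real systems correct velocity and orientation as well, but the quadratic growth of uncorrected inertial error is the essential point.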
Indoor mapping products, such as those based on GeoSLAM, assume that you are working in an unknown environment. But what if you already have information about the environment in the form of GIS maps, point clouds, and other georeferenced information? Tracking your location in a known environment using just a smart handheld device is the problem that Crispin Hoult and his company Linknode are addressing. The specific business opportunity he is targeting with this technology is on-site (in-field) augmented reality visualization of BIM designs for new buildings with a consumer smart handheld.
Technically, SfM algorithms require multi-sensor input. The typical smartphone or tablet has a range of motion-tracking MEMS devices built into it. If you are outside in an unobstructed environment, the GPS sensor can reliably report your location to within 5 or 10 meters. But when you enter an obstructed environment for which you have reliable georeferenced information, you need an algorithm that uses the camera and the inertial navigation sensors in the handheld tablet to accurately track its location and orientation in real time. Crispin has been working with Heriot-Watt University to "mobilize" an algorithm that does just this. Basically, the algorithm uses “visual-inertial tracking” for online tracking of the 3D position and 3D orientation of the tablet, supported by ‘prior’ learning of the visual and structural appearance of the environment through structure-from-motion (SfM).
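As a rough sketch of what that ‘prior’ learning buys you, assume the SfM step has already produced a georeferenced 3D map of the environment. Tracking then amounts to adjusting the camera pose until known map points project to where the camera actually sees them; the pinhole projection below (with invented intrinsics, pose, and landmark) shows the core geometric relation:

```python
import numpy as np

# With a prior 3D map (from SfM), each candidate camera pose predicts where a known
# map point should appear in the image. Tracking adjusts the pose so predicted and
# observed pixel positions agree. All numbers here are hypothetical.

def project(point_world, R, t, fx=1500.0, fy=1500.0, cx=960.0, cy=540.0):
    """Project a world-frame 3D point into pixel coordinates for a pinhole camera."""
    p_cam = R @ (point_world - t)          # world frame -> camera frame
    u = fx * p_cam[0] / p_cam[2] + cx      # perspective division plus intrinsics
    v = fy * p_cam[1] / p_cam[2] + cy
    return np.array([u, v])

landmark = np.array([2.0, 1.0, 0.0])        # known map point (e.g. a building corner), metres
pose_R = np.eye(3)                          # camera orientation (identity = looking along +Z)
pose_t = np.array([0.0, 0.0, -10.0])        # camera position, 10 m back from the origin

predicted = project(landmark, pose_R, pose_t)
observed = np.array([1262.0, 692.0])        # hypothetical pixel where the corner was detected
print("reprojection error (pixels):", np.linalg.norm(predicted - observed))
```

Minimizing this reprojection error over many such landmarks, while the IMU carries the pose between video frames, is the essence of visual-inertial tracking.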
Crispin, what is the business problem that UrbanPlanAR addresses?
We are targeting UrbanPlanAR at on-site visualization using only a consumer smart handheld. We are making it possible for an architect, owner, or city planner to come to a site with a handheld tablet and in real-time overlay a BIM model showing the design for a new building in the context of the real world. This is an application of augmented reality. What you see on the tablet is the real world and into that we superimpose the BIM model. Our focus right now is to visualize the building design in its urban context - making it possible to walk around in the area and see the building from different viewpoints. We add value by providing more trusted, more contextual visualizations so people can make decisions and engage more with the planning process.
The alternative way of doing this is to create a completely virtual world (often referred to as a black box), but then you have problems with resolution and realism. We have research carried out by the University of Glasgow showing that on-site visualization, rather than a black-box rendering, is more trustworthy and enables more engagement from stakeholders.
Another way to look at the business value of UrbanPlanAR is that it adds value to your BIM investment. Designers and contractors invest in BIM because it enables automated clash detection, material takeoff, quantity surveying, construction staging and many other applications. One of the important value-adds that hasn't been fully recognized is visualization. Our technology can be another strong reason for creating a BIM model in the first place. If you have already developed a BIM model, it adds value to your model. With our software you can take your BIM model, walk out to the proposed site and show all stakeholders - potential customers, city planners, city engineers, and nearby land and building owners what the proposed structure is going to look like on site, in a real world context.
How does this work, and what do I need to do to make this augmented reality visualization of a building design?
The first step is to choose the context. Unlike the traditional approach, where we would choose a single location from which to view the building design, with UrbanPlanAR we choose an area of interest, which could be a city square, for example. Then we collect georeferenced information about the area we have chosen. Currently we use imagery captured specifically for that site, but we will soon support alternatives such as existing datasets, for example a city model, a LiDAR scan, oblique imagery, or satellite or overflight imagery.
The next step is to get the BIM model of the design from the architect. To do this we plug into BIM 360, which is the Autodesk data store for BIM models. For smaller projects we can take a model published directly from Revit.
At this point we have the context and we have the BIM model. Then we take the mobile device out into the field, on a tripod or simply held in the hand, and UrbanPlanAR uses the context to work out exactly where it is, enabling you to see your design within the real world in real time – displaying the BIM model not only “onto”, but visually “within” existing reality.
Unlike virtual reality systems, where cost and time limit the digitization process, what you see in augmented reality is the real world itself - 3D, at effectively infinite resolution, for free. What we are showing is a combination of real and virtual. As users stand at the site, they see exactly what is there plus the change. That is what builds trust - you are on-site, out in the field or the city square, walking around and showing stakeholders what is actually there with the virtual model superimposed. You can also record the session so you can replay it back in the office. We call that real-world, GIS-based augmented reality GIality. Everything is georeferenced, everything is anchored in the real world; that is what links the virtual to the real world. You need to know where the virtual building is in real-world coordinates. The technical challenge is that you also need to know where the observer is and their orientation as they walk around the site.
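As a rough illustration of the georeferencing link Crispin describes, here is a minimal sketch (hypothetical coordinates, and a flat-earth approximation that is only adequate over a few hundred metres) of turning the real-world positions of the observer and the virtual building into the local offset a renderer would need:

```python
import math

# Both the virtual building and the observer have real-world (geodetic) coordinates;
# the renderer needs their relative offset in a local metric frame. Equirectangular
# approximation; coordinates below are invented for illustration.

EARTH_RADIUS = 6_378_137.0  # metres (WGS84 semi-major axis)

def local_offset(lat_ref, lon_ref, lat, lon):
    """East/north offset in metres of (lat, lon) relative to a reference point."""
    east = math.radians(lon - lon_ref) * EARTH_RADIUS * math.cos(math.radians(lat_ref))
    north = math.radians(lat - lat_ref) * EARTH_RADIUS
    return east, north

observer = (55.8642, -4.2518)        # hypothetical observer position
building = (55.8651, -4.2495)        # hypothetical anchor point of the BIM model

east, north = local_offset(*observer, *building)
bearing = math.degrees(math.atan2(east, north)) % 360
print(f"building anchor is {east:.0f} m east, {north:.0f} m north of the observer "
      f"(bearing {bearing:.0f} deg)")
```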
Who do you foresee as the end user of UrbanPlanAR?
I would say that the primary application is early stage project feasibility and development for a new building. It could be used by the architect or by the developer who is funding the architect. It could also be used as a sales tool at this stage.
The second stage is the planning stage. When you are looking at permitting you need to convince the city or town council, who are the custodians of the city. Typically landscape architects are hired to develop and present the environmental statement and impact of the project to the city planner. But we now live in a world where we no longer have to rely on consultants to convey information; nowadays the developer may do this him or herself. The city or town planners may also be users - as validators of what has been presented. UrbanPlanAR increases the breadth and depth of 3D information available about the project. It is also interactive, in that the architect can make changes to the BIM model and synchronize them to the users. Another advantage is that users can go out to the site at any time to inspect a detail of the design they may have missed.
What we have found useful is to interactively display different surfaces or finishes, different building heights, and different models. Preconstruction changes are very easy to make. The architect goes away after getting feedback from stakeholders, creates a new or modified BIM model, uploads it to the cloud, and then a user synchronizes with the store and takes it out to the site. By doing this at the feasibility stage, early in the planning process, communities become more involved and don't see what is being presented as almost a fait accompli. You want them to leave feeling they are actively involved in the design process - then you get more buy-in and an appeal during the construction stage is less likely.
Does the BIM model have to be a Revit model?
We integrate with BIM 360 today and we are planning to support Bentley BIM. In addition, Industry Foundation Classes (IFC) is one of our supported file formats. But this is not yet equivalent to importing files directly from Revit or Bentley BIM, because IFC files exported from some BIM vendors do not include textures.
What types of hardware does your software run on?
We support the iPad Pro and we are working on a Microsoft Surface port.
Does UrbanPlanAR require preprocessing in the cloud or on a desktop?
Neither the BIM model nor the georeferenced context ends up being the operational model we need to do the localization, but both are valid inputs. All the input data, including the GIS context and the Revit model, is preprocessed on a desktop or in the cloud and then synchronized to the handheld.
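A rough sketch of what that preprocess-then-sync pattern might look like in practice; the package layout, file names, and fields here are hypothetical illustrations, not Linknode's actual format:

```python
import json
import pathlib

# Heavy inputs (BIM model, georeferenced context) are reduced on a desktop or in the
# cloud to a compact package the tablet can load and keep in sync. Names are invented.

def build_device_package(bim_mesh_path, context_features_path, anchor_lat, anchor_lon, out_dir):
    """Bundle the preprocessed model and context into a folder the handheld syncs."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    manifest = {
        "bim_mesh": pathlib.Path(bim_mesh_path).name,                    # simplified, textured mesh
        "context_features": pathlib.Path(context_features_path).name,   # prior SfM map of the site
        "anchor": {"lat": anchor_lat, "lon": anchor_lon},                # real-world georeference
        "version": 1,
    }
    (out / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return out / "manifest.json"

# Hypothetical usage:
# build_device_package("tower_block.glb", "square_features.bin", 55.8650, -4.2500, "package/")
```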
Crispin, you already have a product that is designed for rural environments and also relies on augmented reality. Can you tell me about that?
Yes, it is called VentusAR, and it is designed to work in a rural environment. It has been targeted at visualizing wind farms and solar farms for land owners, communities and planners, because these projects are very contentious. Before our product became available, the standard methodology was to take photographs out in the field, go back to the office, process them on the desktop to incorporate the proposed wind or solar farm, send the resulting images to the printer, and then show the pictures to the stakeholders. Because of the time and cost this tended to be done late in the planning process, which made it difficult to make changes.
Late last year for the first time we used VentusAR for visualizing transmission towers. And a few weeks ago we did our first modeling of transmission lines. As with UrbanPlanAR, VentusAR uses the tablet’s integrated camera, GPS position and 3D gaming capabilities to create an accurate, location-specific, realistic view of transmission tower models in context. You can investigate different scenarios with different stock models like towers, turbines, and panels. It works well in a rural environment because our georeferenced information about the real world is typically a digital terrain model. In a rural environment, if your GPS is out by 5 meters and you are looking at something a hundred meters away, it doesn't matter that much. In an urban environment where you need more precision and can't rely on GNSS, you need a more sophisticated approach, which is what we have done together with Heriot-Watt University for UrbanPlanAR.
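A quick back-of-the-envelope check of that point: the on-screen misalignment caused by a GPS position error grows rapidly as the object gets closer, which is why a 5 m error is tolerable for a turbine a hundred metres away but not for a building you are standing next to. A small sketch (worst case, with the error perpendicular to the line of sight):

```python
import math

# How much does a GPS position error shift the apparent direction of an object,
# as a function of its distance?

def angular_error_deg(position_error_m, distance_m):
    return math.degrees(math.atan2(position_error_m, distance_m))

for distance in (100, 50, 20, 10):
    print(f"5 m GPS error at {distance:4d} m -> "
          f"{angular_error_deg(5, distance):5.1f} deg of misalignment")
```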
The project is supported by Innovate UK, the United Kingdom’s innovation agency (Project 102040).