There is a huge volume of satellite imagery being captured today, and a huge archive has accumulated over the past couple of decades. The challenge is to automatically find objects such as ships, trees, buildings, or containers in this vast amount of data and track changes in them over time. This morning at this year's FOSS4G NA (Free and Open Source Software for Geospatial North America) conference in St. Louis, Chris Holmes of Planet Labs gave an insightful overview of the application of deep learning to geospatial data, identifying opportunities for the open source community to lead in applying this data and technology to address this challenge.
Outside of Google, Planet Labs, DigitalGlobe, and a few others, deep learning has been applied primarily to recognizing people in selfies and cats and dogs in people's photos on online repositories such as Flickr. I have blogged previously about competitions that DigitalGlobe (which has 100 petabytes of imagery) and Nvidia have hosted to automate building footprint and road network recognition. Planet Labs has a computer vision and analytics team that has been applying the results of academic research to geospatial data, primarily imagery from Planet's constellation of 150+ satellites. For example, they have used deep learning to detect building footprints and road networks in Dar es Salaam. Chris showed examples of airplane, ship, and container recognition achieved using deep learning. But these are research pilots.

One company has applied deep learning at scale for a practical application: Google has been developing and applying the technology to building recognition, automatically identifying building footprints and features such as aerials. The deep learning algorithms have been developed largely in academia, and as a result the code is for the most part open source. For example, a deep neural network model originally developed for medical image segmentation, called U-Net, has been applied to identifying building footprints. But successfully training deep networks requires many thousands of labeled training samples, and this training data is not open. Labeling involves people manually ground-truthing building footprints and other features so that the deep learning algorithms can learn what to recognize.
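To give a flavour of what a model like U-Net looks like, here is a minimal sketch of a U-Net-style encoder/decoder for building-footprint segmentation in PyTorch. The channel sizes, depth, and single-class output are illustrative assumptions on my part, not the architecture Planet or Google actually uses.

```python
# A tiny U-Net-style network: an encoder that downsamples, a decoder that
# upsamples, and skip connections that carry fine spatial detail across.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)  # per-pixel logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        # Skip connections: concatenate encoder features with decoder features.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# A 256x256 RGB tile in, a 256x256 map of building/no-building logits out.
model = TinyUNet()
logits = model(torch.randn(1, 3, 256, 256))  # shape: (1, 1, 256, 256)
```

Training such a model is exactly where the thousands of ground-truthed footprints come in: each labeled polygon becomes a target mask for tiles like the one above.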
Chris believes that it is time to open these models to a broader group, and there are some technical things that need to happen to make this possible. We need new formats for exchanging raster and vector data. Cloud Optimized GeoTIFF (COG) makes it possible to stream imagery from cloud sources such as Amazon S3. Planet, Google Earth, and QGIS already support COGs, as do other open source tools such as COG-Explorer. Chris and others are also working on WFS 3.0, a completely new, developer-friendly version of an existing OGC (Open Geospatial Consortium) standard for exchanging vector data over the web.
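The point of COG is that you can read just the part of an image you need over HTTP, without downloading the whole file. A minimal sketch with the open source rasterio library, using a placeholder URL; any COG on S3 or another HTTP server works the same way:

```python
# Read one 512x512 window from a large remote image. Because a COG is
# internally tiled, only the bytes covering this window are fetched.
import rasterio
from rasterio.windows import Window

url = "https://example-bucket.s3.amazonaws.com/scene.tif"  # hypothetical COG

with rasterio.open(url) as src:
    window = Window(col_off=0, row_off=0, width=512, height=512)
    chip = src.read(1, window=window)  # first band as a NumPy array
    print(chip.shape, src.crs)
```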
Another technical advance that is required is a common way to query satellite data by date, time, location, spectral bands, and so on. Chris and others have been working on a standard for searching satellite imagery. The SpatioTemporal Asset Catalog (STAC) is an open specification that came about when fourteen different organizations came together to increase the interoperability of searching for satellite imagery. Currently, when users want to search for all the imagery in their area and time of interest, they can't make just one search; they have to use different tools and connect to APIs that are similar but all slightly different. The STAC specification aims to make that much easier by providing common metadata and API mechanics to search and access geospatial data. Chris emphasized that standards are essential for opening up satellite data to processing and visualization, and he encouraged the open source community to get involved with the OGC to further developer-friendly standards.
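In practice a STAC search is a single request with a bounding box and a time range. Here is a hedged sketch using a hypothetical endpoint URL; the request shape follows the STAC API specification, and any compliant catalog should answer it the same way:

```python
# One POST to a STAC /search endpoint: bounding box plus datetime range.
import requests

stac_search = "https://stac.example.com/search"  # hypothetical STAC API

body = {
    "bbox": [39.1, -7.0, 39.5, -6.6],  # roughly Dar es Salaam (W, S, E, N)
    "datetime": "2018-01-01T00:00:00Z/2018-05-01T00:00:00Z",
    "limit": 10,
}
resp = requests.post(stac_search, json=body)
resp.raise_for_status()

for item in resp.json()["features"]:
    # Each STAC Item is a GeoJSON Feature with common metadata and asset links.
    print(item["id"], item["properties"]["datetime"], list(item["assets"]))
```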
Currently, using imagery data requires a lot of preparation by the end user, such as lining up pixels, assigning a projection, removing clouds and haze, and making other corrections. We need to move toward analysis ready data, where data derived from different sources can be combined directly for analysis.
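One small piece of that preparation is getting scenes from different sources into a shared projection so their pixels line up. A sketch of doing this on the fly with rasterio; the file name is a placeholder, and cloud masking and the other corrections would be separate steps:

```python
# Present a scene in a common CRS without rewriting the file on disk.
import rasterio
from rasterio.vrt import WarpedVRT
from rasterio.enums import Resampling

with rasterio.open("scene_a.tif") as src:  # hypothetical input scene
    with WarpedVRT(src, crs="EPSG:4326", resampling=Resampling.bilinear) as vrt:
        data = vrt.read(1)
        print(vrt.crs, data.shape)
```

Analysis ready data would mean the provider has already done this kind of work, so the end user can go straight to combining and analysing the data.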
Because of the huge data volumes and the computational intensity of the analyses, we need cloud-native engines to perform them. Google, DigitalGlobe (Maxar), and Planet have these deep learning engines, but Chris encouraged the open source community to get more involved in developing such engines.
An essential piece that is required to open up the data and technology to a broader audience is open labeled data; Chris referred to it as an "open labeled data commons". There are analogies with the OpenStreetMap initiative, and Chris suggested that something similar is required for labeled data: a repository of open data, tools to capture and manage the data, and a community to capture and curate that data.
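To make the idea concrete, here is a hypothetical example of what one record in such a commons might look like: a GeoJSON Feature pairing a ground-truthed geometry with the label a model should learn and some provenance. The schema is purely illustrative, not an existing standard:

```python
# A hypothetical labeled training sample, expressed as a GeoJSON Feature.
labeled_sample = {
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[39.27, -6.81], [39.27, -6.80],
                         [39.28, -6.80], [39.28, -6.81], [39.27, -6.81]]],
    },
    "properties": {
        "label": "building",                  # class the model should learn
        "source_image": "scene_20180514.tif", # imagery the label was drawn on
        "labeled_by": "volunteer-mapper",     # provenance, as in OpenStreetMap
    },
}
```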
The other key piece is converting this data into actionable information, for example, dashboards that would alert municipal governments to unpermitted development, forestry enforcement agencies to illegal logging, or commodity price monitoring agencies to changes in the flow of ships into harbours or to large changes in container volumes at container ports.
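The logic behind such an alert can be very simple once the detections exist. A hedged sketch, where count_objects stands in for a real detection pipeline and the threshold is an arbitrary illustration:

```python
# Compare object counts from two dates and flag large swings.
def count_objects(detections):
    # detections: list of (class_name, confidence) pairs from a model run.
    return sum(1 for cls, conf in detections if cls == "container" and conf > 0.5)

def alert_on_change(before, after, threshold=0.25):
    # Fire an alert when container counts change by more than the threshold.
    n_before, n_after = count_objects(before), count_objects(after)
    if n_before and abs(n_after - n_before) / n_before > threshold:
        print(f"ALERT: container count changed {n_before} -> {n_after}")

alert_on_change(
    before=[("container", 0.9)] * 120,
    after=[("container", 0.9)] * 70,
)
```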
Taken together, Chris's suggestions and the work already underway represent a roadmap for the open source geospatial community to lead in making the huge volumes of satellite imagery and other geospatial data queryable and capable of generating actionable information.