I thought it would be worthwhile to include here an expanded version of my contribution to a webinar that IQGeo presented last week with a provocative title that I think goes right to the nub of the challenge that telecom and utility companies are facing in developing digital twins of their network assets. In this new world there is an exponentially increasing volume of real-time data and less and less time to conduct analyses and make decisions, and telecoms and utilities are finding that their legacy GISs are simply not up to the task. The webinar speakers were Linda Stevens, Managing Partner at 51by1, Peter Batty, Chief Technology Officer at IQGeo, and myself. It was moderated by Steve Tongish of IQGeo. The full webinar is available on demand here.
Geoff Zeiss opening comments
Common symptoms of the failure of legacy GIS include as-built backlogs stretching into months, inaccurate as-builts submitted by contractors, limited updates from the field, frequent underground infrastructure damage during construction, inability to connect customers to transformers or feeders, and repeats (multiple truck rolls because of inaccurate information about field assets).
As-built backlogs are the result of a construction information flow involving different organizations using different tools: engineering uses CAD, construction relies on paper drawings, and records management maintains the data in a GIS. The major factor contributing to as-built backlogs is the decades-old lack of interoperability between CAD and GIS. The result is that the GIS is inaccurate and permanently out-of-date.
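To make the gap concrete, here is a minimal sketch of moving design geometry from CAD into a GIS format, assuming GDAL's DXF driver is available; the file names and coordinate systems are illustrative. Note what it cannot carry across: the DXF exposes geometry and layer names, but none of the network attributes or connectivity that the GIS records model needs, and filling that gap by hand is where the backlog comes from.

```python
# A minimal sketch of moving design geometry from CAD into GIS,
# assuming GDAL's DXF driver. File names are hypothetical.
import geopandas as gpd

# Read the engineering design; the DXF driver exposes geometry and a
# "Layer" attribute, but none of the network attributes or connectivity
# that the GIS records model needs -- that gap is filled manually today.
design = gpd.read_file("feeder_design.dxf")

# Reproject from the drawing's local grid to the GIS coordinate system
# (EPSG codes are illustrative) and write to a GeoPackage staging layer.
design = design.set_crs(epsg=2272, allow_override=True).to_crs(epsg=4326)
design.to_file("as_built_staging.gpkg", layer="conduit", driver="GPKG")
```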
During the webinar participants were polled on how long their average as-built update backlog is:
8% Less than 1 week
8% Less than 1 month
21% Less than 3 months
25% More than 3 months
38% Don’t know or not relevant
It is indicative of the problem that, of those from utilities or telecoms, only about a quarter had a backlog of less than a month. (A significant number of the participants were vendors or service organizations for whom the question did not apply, hence the 38% response to the last option.)
Another problem relating to as-builts is lack of quality control. Construction contractors often submit as-designeds rather than what was actually constructed. Contractors may also red-line as-builts to indicate the difference between the design and what was constructed, but these are often sketches rather than accurately surveyed data. The result is that sometimes it is uncertain on which side of a street a cable or pipe has been installed or whether it was actually installed on private property instead of on the public right-of-way.
If there were an effective feedback mechanism these problems might be ameliorated over time, but getting updates from the field into the GIS in a timely fashion is a challenge for many utilities and telecoms. I have encountered telecoms and utilities where updates from field staff did not get into the GIS at all or were seriously backlogged. Not seeing updates reflected in the GIS in a timely manner discourages field staff from reporting changes and errors. The result is that over time the quality of GIS data degrades.
Inaccurate GIS data contributes to the 400,000 incidents of underground infrastructure damage that occur every year in the U.S. This represents a serious risk for workers and the public. Just a week ago there was an explosion in Murrieta, California which left one person dead and fifteen injured. In the last few months there have been explosions in downtown San Francisco, Durham, NC, and I could go on. It is also a multibillion-dollar drag on the economy and is responsible for many construction projects running over budget and over schedule.
Missing or incomplete connectivity linking customers to transformers and feeders directly affects outage duration. It is not uncommon for a utility not to know the correct connectivity for 15-20% of its customers (I would add that knowing the phase for each customer is often a bigger problem). Not being able to identify the damaged upstream device when a customer reports an outage, or to identify which customers are affected by a failed device, directly impacts outage duration and restoration statistics (SAIDI and CAIDI).
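As a simple illustration of why complete connectivity matters, here is a toy sketch of the upstream trace that an outage management system depends on; the connectivity table and element names are hypothetical.

```python
# A toy upstream trace: given a customer meter, walk parent links to
# find the feeding transformer and protective device. The connectivity
# table below is hypothetical.
upstream = {            # child -> parent element of the network
    "meter_123": "xfmr_7",
    "xfmr_7": "fuse_F12",
    "fuse_F12": "feeder_F1",
}

def trace_upstream(node: str) -> list[str]:
    """Return the path from a customer up to the feeder head."""
    path = [node]
    while node in upstream:
        node = upstream[node]
        path.append(node)
    return path

print(trace_upstream("meter_123"))
# ['meter_123', 'xfmr_7', 'fuse_F12', 'feeder_F1']
# A missing or wrong link for 15-20% of customers breaks this trace, so
# neither the failed device nor the affected customers can be identified.
```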
Inaccurate GIS data is also responsible for "repeats", where more than one crew trip is required to maintain a piece of equipment or fix a problem. Typically it involves one trip to determine where an asset is and its details and another to return with the required equipment to resolve the problem. This also impacts outage duration statistics.
Other symptoms include redundant data where the same information is maintained simultaneously in different GISs by different groups within a utility and difficulty in reporting accurate situation assessments of network assets to management and regulators.
Question: Utilities devote huge resources to maintaining redundant geospatial data in different systems, much of which is inaccurate and out of date. This unreliable GIS data directly affects outage statistics, field staff efficiency, the reliability of reporting to regulators, and the ability of management to get an accurate perspective on operations. Why has this situation persisted for decades?
There are several reasons for this problem persisting as long as it has. Among the top technical reasons is the incompatibility between CAD and GIS data. An important contributing factor is that multiple organizations and departments are involved in the workflow. A fundamental problem is procurement practices that don't treat data as being as important as the physical assets themselves. An enlightened procurement policy can resolve this problem. I worked with a large Brazilian telecom that enforced a policy that contractors were only paid when as-builts were in the GIS and verified. Another fundamental problem is that many regulators have not recognized the value of accurate, up-to-date GIS data. I only know of a few that have recognized how important this is for utility operations. For example, about a decade ago ANEEL, the electric power regulator in Brazil, imposed a rule that all utility GISs had to be 95% accurate or better. This was enforced by frequent audits of utility GISs, with penalties imposed if 95% accuracy was not achieved.
Another important reason that this problem has persisted is that in some countries, including the U.S. and the U.K., there is a liability structure that invariably assigns responsibility to the construction contractor for damage to underground infrastructure during construction. This provides little motivation for telecom and utility network operators to improve the reliability of their GIS data. In some countries, such as France, liability is shared between the telecom or utility operator and the contractor. For example, if a contractor hits a utility facility that is shown accurately on utility GIS maps, it is the contractor's liability. Conversely, if a contractor hits a utility facility that is placed incorrectly on utility GIS maps, it becomes the utility's liability.
Question: What is the impact of reality capture on how we create and maintain geospatial data? Can capture of network connectivity be fully automated?
I have seen estimates of telecom and utility GIS accuracy of 70-80% or even lower. With recent advances in reality capture, a question utilities are asking themselves is whether it is more cost effective to conduct a complete reinventory of their network assets, recapturing the location and attributes of their field equipment, or to try to correct the existing GIS data. Recently a very interesting pilot was carried out by Duquesne Light in western Pennsylvania. It compared the network model derived from a standard walking inventory with a model derived from LiDAR/photo data captured with a truck and post-processed in the office. It found that the model based on LiDAR scans was just as reliable as the walking inventory and much safer to produce. In addition, the LiDAR scan provided significant side benefits. For example, it could be used for a joint-use audit. The LiDAR data also allowed clearances from poles and cables to buildings and structures to be accurately measured, both to verify compliance with regulatory clearances and to verify third-party attachment heights.
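As an illustration of the clearance measurement, here is a minimal sketch of a nearest-neighbour check between two classes of points assumed to have already been extracted from a classified LiDAR point cloud; the coordinates are invented.

```python
# A minimal clearance check on classified LiDAR points, assuming two
# numpy arrays of x,y,z coordinates extracted from the point cloud
# (e.g. conductor class vs. building class). Coordinates are invented.
import numpy as np
from scipy.spatial import cKDTree

conductor_pts = np.array([[0.0, 0.0, 9.5], [1.0, 0.0, 9.3]])
building_pts = np.array([[0.5, 2.5, 8.0], [0.5, 3.0, 7.0]])

# Nearest-neighbour 3D distance from each conductor point to the
# building class; the minimum is the clearance to compare against code.
tree = cKDTree(building_pts)
dist, _ = tree.query(conductor_pts)
print(f"minimum clearance: {dist.min():.2f} m")
```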
In a related development, machine learning is being applied to automatically recognize different types of telecom and utility equipment in LiDAR and photo imagery, including high resolution satellite imagery, and to automate inspections for maintenance issues such as corrosion and vegetation encroachment. To support this application of AI, EPRI is collecting training data sets for utility applications of machine learning. A startup, SiteSee, claims that it can inspect telecommunications towers with a UAV and automatically analyze the resulting imagery for problems with antennas. I don't think that we are quite at the stage of network connectivity being routinely determined using these techniques, but I suspect that the day is fast approaching when this will be possible.
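To give a flavour of how such detection works, here is a minimal sketch using an off-the-shelf pretrained detector as a stand-in; a production system would use a model fine-tuned on utility-specific imagery such as the EPRI training sets, and the image file name is hypothetical.

```python
# A minimal object-detection pass over field imagery. The COCO-pretrained
# model is a stand-in for a detector fine-tuned on utility equipment.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

img = read_image("pole_photo.jpg")           # hypothetical inspection photo
batch = [weights.transforms()(img)]          # preprocessing for this model
with torch.no_grad():
    detections = model(batch)[0]             # boxes, labels, scores

# Keep only confident detections for the inspection report.
for box, score in zip(detections["boxes"], detections["scores"]):
    if score > 0.8:
        print(box.tolist(), float(score))
```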
Question: Poor GIS data quality for underground assets within the utility industry can have direct and very serious consequences in injuries and lost lives, not to mention the huge financial impact in damages and fines [400 lives, thousands of injuries and a $50 billion drag on the U.S. economy arising from 400,000 cases of utility damage every year]. Can new tools for underground asset capture and the sharing of this data begin to mitigate these risks?
There are two major challenges in improving the reliability of our maps of underground utilities: remote sensing technology to detect them without excavation, and sharing the information that is captured every day about utilities exposed during construction excavation. In the last few years there have been important advances in both areas.
One of the most versatile technologies for detecting underground utilities is ground penetrating radar (GPR). It can detect underground objects that electromagnetic techniques can't, and it is better at estimating depth than other technologies, which makes it possible to generate 3D maps of underground utilities from GPR scans. However, until now it has required a trained geotechnologist to interpret the radar scans. GPR arrays have typically been mounted on a pushcart, making scans slow to acquire and hazardous for the operator. In the last year or two new technologies have been developed that allow GPR scans to be acquired using a vehicle. With this approach there are no boots on the pavement, making it much safer for the operator, and it is also much faster. The most recent advance makes it possible to capture GPR scans from a vehicle travelling at up to 135 km/hour. Another advance is combining a GPR array with a LiDAR scanner on the same vehicle to capture below- and above-ground scans simultaneously. A major advance is new GPR software that processes multiple GPR scans to present a view of the underground that non-geotechnical professionals can interpret. These are very important advances in the detection technology and in democratizing its use.
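The depth estimate, and much of the interpretation difficulty, come down to one piece of physics: the radar velocity in the ground is the speed of light divided by the square root of the soil's relative permittivity, which must itself be estimated. A worked example, with an illustrative permittivity value:

```python
# GPR depth estimation: velocity in the ground is c / sqrt(eps_r), and
# depth is half the two-way travel time times that velocity.
C = 0.2998          # speed of light in metres per nanosecond

def gpr_depth(two_way_time_ns: float, rel_permittivity: float) -> float:
    velocity = C / rel_permittivity ** 0.5   # m/ns in the ground
    return velocity * two_way_time_ns / 2.0  # one-way depth in metres

# A 20 ns reflection in moist soil (eps_r ~ 9, illustrative) is ~1 m deep.
print(f"{gpr_depth(20.0, 9.0):.2f} m")
```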
In North America every state and most provinces have a government-mandated one-call centre that anyone planning to excavate is required to contact (811 in the U.S.). All utilities and telecoms with services in the area of the excavation are then required to mark the ground with paint indicating the probable location of their cables and pipes. In the U.S. the locate industry is estimated to amount to billions of dollars annually. But in general this information is not shared, and the same underground assets may be located over and over again.

Recently there have been initiatives at the city, regional, and national levels to share information about the underground. The most recent is the National Underground Asset Register (NUAR) in the U.K., supported by the Geospatial Commission, which is currently running two pilots in the Northeast of England and in central London. In Belgium the KLIP initiative, which has been running for a number of years, was motivated by the 2004 gas explosion which caused multiple fatalities. In the Netherlands a similar initiative, KLIC, for sharing the location of underground utilities was augmented in 2018 by a "key registry of the underground" (BRO) which mandates sharing geotechnical information discovered during drilling and construction excavation. There has also been a pilot by the City of Chicago to share information captured during excavation with contractors. An interesting experiment by Bentley placed low-cost consumer cameras on a backhoe to photograph exposed utilities and used software to identify and georeference the underground utilities exposed during excavation. In another experiment, exposed underground infrastructure together with accurately surveyed control points was photographed with smart phones and the images processed to identify and georeference the underground utilities.
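The last two experiments rest on the same classic technique: fitting a transform from image coordinates to ground coordinates using surveyed control points. A toy illustration of the idea, with invented numbers:

```python
# Fit an affine transform from image pixel coordinates to ground
# coordinates using surveyed control points, then apply it to a feature
# picked out of the photo. All coordinates are invented.
import numpy as np

px = np.array([[100, 200], [800, 220], [450, 900]], dtype=float)  # pixels
gnd = np.array([[500010.2, 4100020.5],
                [500014.8, 4100020.1],
                [500012.4, 4100015.9]])      # surveyed positions in metres

# Solve gnd ~= [px, 1] @ A by least squares (exact with 3 control points;
# more points would give a redundancy check on the fit).
A, *_ = np.linalg.lstsq(np.hstack([px, np.ones((3, 1))]), gnd, rcond=None)

pipe_px = np.array([420.0, 610.0, 1.0])      # pipe seen in the image
print(pipe_px @ A)                           # its georeferenced position
```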
When viewed together the technical advances in GPR and the initiatives to share information about the underground are evidence of accelerating vendor investment in remote sensing hardware and software. In addition governments are beginning to show serious interest in finding ways to encourage the sharing of information about the location of underground assets.
Question: Why has there been so little change in the GIS industry in utilities and communications over the past twenty years? Why have there been no new entrants in this space?
A fundamental problem is that data has been valued less than the physical infrastructure itself. The challenge of changing traditional construction practices is another. I also believe that there has not been an awareness among utilities and regulators of the cost of unreliable GIS data. But with the growing interest in developing digital twins of network assets, awareness of the importance of accurate, real-time GIS data is increasing among regulators, utilities, and telecoms. The example of ANEEL in Brazil is evidence of how regulators can contribute to changing this. As I have mentioned previously, enlightened procurement policies can also have an important impact. A factor that is harder to change is a liability structure for utility damage during construction that provides little motivation for telecoms and utilities to improve the quality of the location information about their underground assets.
Audience question: What are the principal differences between the GIS requirements for telecom and utility network operators? What can these different industries learn from each other?
In the U.S., since the breakup of AT&T, the development of cell technologies, and the rise of the internet, telecom (including cable) has become extremely competitive. This has engendered a focus on the customer that is still rare among electric power and gas utilities, and even more so among water utilities, and it has driven the explosion in smart phone capabilities and smart phone apps. It also means that telecoms are reluctant to share information about the location of their network infrastructure; they will not share information about their underground networks without assurance that no more of this information is shared than is necessary. Electric and gas utilities are generally more cooperative and more willing to share data. Another major difference is that large telecom companies have led most industries in the development of IT technologies - remember the developments at Bell Labs. Most electric power, gas, and water utilities have not had much IT capability in-house, which is why there is a high level of interest among small to medium sized utilities in software as a service (SaaS). A further technical difference is that telecom data models are much more complex than utility models. My rule of thumb has been that telecom data models involve hundreds of classes, electric a maximum of perhaps 150, and gas and water less than 50. This is changing with the rise of the smart grid, with utility networks evolving into something closer to communications networks.
This suggests that utilities can learn a lot from the IT expertise of telecoms and, in particular with the growth of the smart grid, from the complex data models and supporting applications developed by telecom firms. They can also learn from the consumer-oriented focus represented by smart phones and smart phone apps. Telecoms could learn from utilities about the benefits of sharing data, especially about the underground.
Audience question: What is the role of governments in improving the accessibility and interoperability of utility network data?
As I have mentioned, there is a growing awareness of the social and economic cost of not knowing where network assets are located or their condition. Collecting and sharing this information does not necessarily require government involvement, but there are reasons why governments should be involved. First, as I mentioned, this is a sensitive issue for the very competitive telecom industry, and any policy has to protect critical competitive information. Secondly, it can't simply push all of the cost onto construction firms, who already suffer from very low margins. In the U.S. an important additional reason for government involvement is the Critical Infrastructure Protection Act, which restricts access to location and other information about network assets. As I have mentioned, a growing number of governments have gotten or are getting involved in encouraging or mandating the sharing of information about network assets. Furthermore, in some countries there is discussion underway about a national infrastructure map (U.S.) or a national digital twin (U.K., Singapore, and others).
Key takeaways
- There are important technology innovations, among them reality capture for above and below ground and machine learning to detect and assess network assets, that are poised to dramatically change the utility operating business.
- Governments are recognizing the economic and social costs of not knowing the location of network assets and are implementing policies to encourage or require the sharing of location information about network assets.