In my last blog I described four of the major problems faced by organizations responsible for maintaining network infrastructure: the "as-built" problem; providing field staff with access to timely, reliable infrastructure data; enabling field staff to provide feedback in the form of red-lines or markups; and business processes that are not designed to optimize data quality.
This time I would like to describe what Brad Lawrence of ENMAX Power Corp has done, because he has been a leader, first in recognizing these problems and second in implementing a practical solution.
Quite a few years ago the City of Calgary, which is in western Canada and has a population of around 800,000, passed a very forward-looking bylaw requiring all utilities and telecoms within the city to submit data showing the location of their network infrastructure to a consortium called JUMP, the Joint Utility Mapping Project (I think). The required data format was DGN files, which tells you how long ago this was. What this meant for ENMAX was that they had to create digital information showing the location of their network infrastructure: conductors, transformers, poles, and other equipment. In a very forward-looking decision, they chose to store all of this digital information, including location, in an Oracle RDBMS (relational database management system), which means that right from the beginning they had the concept of a centralized network infrastructure data store. This made things much easier for them later. If I remember correctly, it was a multi-year project to collect the information.

They quickly realized that the digital information would allow them to provide timely, reliable information to their field staff (linesmen and cable locate staff). Besides generating DGN files every month, they decided to also generate maps for their field staff to use in the field. Initially these were microfiche, and every truck was equipped with a microfiche reader. They had to produce and maintain 50,000 microfiche maps, so this was expensive (I think it cost over $1 million annually), but it meant that the field staff had timely and reliable maps showing the location of ENMAX's facilities. This made their job easier and made them much more efficient. The other thing that ENMAX did, which I find absolutely remarkable in its prescience, is that they instituted a policy that red-lines or markups from the field were to be treated as high priority by the records staff, and they guaranteed that these would be captured in the Oracle RDBMS within 24 hours.
Just to make this clear, I'll repeat: 24 hours, not the days, weeks, or months that would be typical of most utilities and telcos (if they have a feedback loop at all). Most recently, ENMAX has implemented a new integrated solution that addresses the as-built problem, so that construction drawings no longer need to be re-digitized into the central Oracle RDBMS. Previously ENMAX had used MicroStation for engineering design and for generating paper construction drawings. The drawings were used by the construction team to build new facilities and were returned as marked-up drawings to the records folks, who digitized them into the Oracle RDBMS. If I remember correctly, there was a team of six or so people responsible for this, and of course there was a long backlog. The backlog meant that the Oracle RDBMS was always out of date.
Let me suggest some of the business benefits that ENMAX realized. First of all, improved quality of service. A concrete example that every utility or telco can relate to is cable locate (or "call before you dig"). Prior to implementing their geospatial system, ENMAX maintained a fleet of 20 or so vans and roughly the same number of staff whose sole responsibility was cable locate: going out to the site of a proposed excavation to identify where ENMAX's electrical cables were so the excavators could avoid digging them up and causing a power outage. I would estimate this might have cost ENMAX about $1 million a year in direct costs. In addition, for every 1,000 calls they received from folks planning to excavate, on average there were nine incidents or "hits", where the excavator dug up a cable and caused a power outage. This led to indirect costs associated with restoring power. After ENMAX implemented their geospatial system, the "hit" rate dropped 36-fold, to 0.25 per 1,000 calls. In addition, because there was a single JUMP data store containing the location of all facilities in the city, they were able to reduce their cable locate fleet from 20 vans to something like two, which saved ENMAX a lot of money.
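The size of that improvement is easy to verify with a couple of lines of arithmetic (just a sketch using the per-1,000-call figures quoted above):

```python
# Cable locate "hit" rates quoted above, per 1,000 locate requests.
hits_before = 9.0    # average hits per 1,000 calls before the geospatial system
hits_after = 0.25    # average hits per 1,000 calls after the geospatial system

improvement = hits_before / hits_after
print(improvement)  # 36.0 -> the "36-fold" reduction
```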
Secondly, the regulator was much happier because they saw an improvement in the quality of ENMAX's facilities data. Every year the regulator requires ENMAX to undergo an audit, in which an independent auditor takes a sample of ENMAX's facilities data out into the field and compares it with reality. The auditor reported something like 99.6% reliability, which is remarkable given that 70-80% is more typical of the industry. The regulator likes this because they have confidence that ENMAX knows where its facilities are. And because ENMAX can provide better information, for call before you dig, for example, there are fewer outages and the public is better served.
I think you'll recognize that Brad and the ENMAX team have addressed the four major problems that I described in my last blog. A couple of questions come immediately to mind. First, how was ENMAX able to recognize the problems so early on? Secondly, how have they managed to get everyone in the organization onside and supporting these very innovative changes?
I would suggest a few key factors that enabled this to happen. First of all, enlightened government legislation was a key instigator. Secondly, there was a single division, headed by Brad, within ENMAX that was responsible for both records and engineering design. In many utilities and telcos the engineering design team and the records team, often called the GIS team, have different reporting structures, and there tends to be little cooperation between the two. Thirdly, Brad is an ex-linesman and understood from first-hand experience what the field staff needed to help them do their job effectively and efficiently. Finally, information technology was the key enabler that made it all possible. Some of this was implemented quite a few years ago, starting in the 90s, so ENMAX had to implement, usually through consultants, heavily customized solutions.
In this day and age, with the benefit of the experience of Brad Lawrence and ENMAX, it is much easier to design the technical architecture and implement such a system using commercial off-the-shelf (COTS) software. To the right I've shown a technical architecture that, if implemented together with the requisite organizational structure and support, would address the four major problems that I outlined in the previous blog. The components of the architecture are: a central geospatial RDBMS such as Oracle; a server cache of DWF sheets or tiles generated from the RDBMS that contain maps of the land base and facilities; a standalone viewer with red-line support for viewing the DWF sheets in the field (Autodesk Design Review); an editor for making changes to the central geospatial RDBMS based on the red-lines returned from the field (Autodesk Map 3D); a synchronizer that enables the DWF tiles to be efficiently distributed to the field when the field staff connect to the network (for example, we have been using Afaria from iAnywhere); and a DWF cache manager which ensures that the DWF cache is always current.
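To make the cache manager's role concrete, here is a minimal sketch of the idea in Python. All names (CacheManager, refresh, the tile ids) are illustrative assumptions of mine, not ENMAX's or Autodesk's code; the point is simply that each cached tile carries the version of the facilities data it was rendered from, so that only tiles whose source data has changed in the RDBMS need to be regenerated and re-synced to the field.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CacheManager:
    # Maps tile_id -> version of the facilities data the cached tile
    # was rendered from. (Hypothetical sketch, not a real DWF API.)
    cache: Dict[str, int] = field(default_factory=dict)

    def refresh(self, db_versions: Dict[str, int]) -> List[str]:
        """Re-render any tile whose source data changed; return the stale ids."""
        stale = [t for t, v in db_versions.items() if self.cache.get(t) != v]
        for tile_id in stale:
            # A real implementation would invoke the map renderer here to
            # regenerate the DWF sheet; we just record the new version.
            self.cache[tile_id] = db_versions[tile_id]
        return stale

# Usage: a field red-line was captured against tile "NW-12" in the RDBMS,
# bumping its version; only that tile is regenerated for the next sync.
mgr = CacheManager(cache={"NW-12": 1, "NW-13": 1})
print(mgr.refresh({"NW-12": 2, "NW-13": 1}))  # ['NW-12']
```

The same version-comparison idea is what lets the synchronizer ship only changed tiles when a truck connects to the network, rather than re-copying the whole map set.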
If you are interested in more information about ENMAX's geospatial solution, Brad has given presentations about it at GITA and GeoAlberta. In recognition of their achievement in this area, ENMAX won an Innovator of the Year Award from GITA two years ago, which Brad accepted on behalf of ENMAX.