
I recently presented an Energy Central webinar on digital asset management, along with Bill Ernzen of Accenture. Unfortunately, the webinar took up all of the allotted time, leaving no time for Q&A with the audience.

Energy Central kindly shared with me the questions that participants submitted during the webinar. I’ll do my best to answer those questions here. I’ve adapted some of the questions to make them work as part of a blog post.


The focus of your talk is based on distribution utilities. How do concepts presented apply to the working environment of transmission utilities?

In the Asset Intelligence section of the webinar, we demonstrated the software's ability to score the condition metric for large transformers and to chart dissolved gas trends over time on a Duval triangle, along with the DGA metrics. Similar capabilities are built into the software for high-voltage circuit breakers, tap changers and generator step-up transformers, among many other asset types.

Do you see any difference in adoption of digital asset management across gas, electricity or water companies?

All utilities that installed their assets in the 1960s and 1970s face the same challenge, whether they are electric, gas or water. Though gas, water and sewer utilities operate differently from electric utilities, the underlying pain points are very similar, and our software accommodates them in how it performs the analysis. The software structures its analysis around business-centered needs, and that method applies equally to electric, gas and water utilities: it builds from asset health indices to probabilities of failure to risk scores, which are then used for maintenance prioritization, replace-versus-refurbish analysis and capital planning. The presentation and analysis capabilities are also very similar across sectors, except where specific regulations change some calculations.
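The chain from health index to failure probability to risk can be sketched in a few lines. The logistic mapping below is a common modeling choice used purely for illustration; it is not the product's actual formula, and the tuning constants are invented for the example.

```python
import math

def probability_of_failure(health_index, k=8.0, midpoint=0.5):
    """Map a 0-1 asset health index (1 = healthy) to a failure
    probability via a logistic curve. k and midpoint are illustrative
    tuning parameters, not vendor defaults."""
    return 1.0 / (1.0 + math.exp(k * (health_index - midpoint)))

def risk_score(health_index, consequence_usd):
    """Risk = probability of failure x economic consequence."""
    return probability_of_failure(health_index) * consequence_usd

# A degraded asset (health 0.2) with a $10M consequence outranks a
# healthy one (health 0.9) carrying the same consequence.
assert risk_score(0.2, 10e6) > risk_score(0.9, 10e6)
```

In practice the mapping from condition data to probability would be calibrated per asset class, but the shape of the pipeline is the same.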

Is there any dollar value on what digital asset management can help avoid as ‘risk’ or prevent as ‘avoided cost’?

Putting a dollar value on avoided cost is fairly straightforward. For example, if you can successfully postpone spending money, then you can use your cost of borrowing (weighted average cost of capital) and the inflation rate to calculate the financing costs you didn’t pay because you didn’t do the project.
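As a rough sketch of that deferral logic (a simplified one-period view, not a full NPV analysis; the rates are illustrative):

```python
def deferral_savings(project_cost, wacc, inflation, years=1):
    """Rough value of postponing a capital project: the financing
    cost avoided, minus the inflation premium you pay when you
    eventually build. Simplified illustration, not a full NPV model."""
    financing_avoided = project_cost * ((1 + wacc) ** years - 1)
    inflation_premium = project_cost * ((1 + inflation) ** years - 1)
    return financing_avoided - inflation_premium

# Deferring a $5M project one year at 7% WACC and 3% inflation
# nets roughly $200,000 in avoided carrying cost.
print(round(deferral_savings(5e6, 0.07, 0.03)))  # 200000
```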

Putting a dollar value on avoided risk is a little more abstract. Let’s say that an aging power transformer represents an economic consequence of $10 million, should it fail. Assume that, before you start your digital asset management project, that transformer has a ten percent probability of failure. You could describe the transformer’s risk as ten percent of $10 million, or $1 million.

If digital asset management practices directed you to refurbish the transformer to reduce the probability of failure to one percent, then you lowered the risk by $900,000 (nine percentage points of a $10 million consequence). If your refurbishment project cost you $400,000, then you realized a 225 percent ROI in terms of risk reduction.
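Computing the figures directly (the risk reduction is the change in failure probability times the consequence, and the project cost here is illustrative):

```python
def risk_usd(prob_failure, consequence_usd):
    """Dollar-denominated risk: probability times consequence."""
    return prob_failure * consequence_usd

# $10M consequence; refurbishment drops failure probability 10% -> 1%.
before = risk_usd(0.10, 10e6)   # $1,000,000
after = risk_usd(0.01, 10e6)    # $100,000
reduction = before - after      # $900,000

project_cost = 400_000          # illustrative refurbishment cost
roi = reduction / project_cost  # 2.25, i.e. 225% in risk-reduction terms
```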

How do you define criticality? Aren’t you mixing probability and consequences in your definition of criticality?

In our definition, criticality is the resulting consequence should a failure occur. Consequence can encompass lost revenue from a power outage, the cost to replace damaged equipment, crew wages to restore power and replace equipment, and other costs. A unique feature of our software is that it runs a connectivity analysis through a topology processor to identify upstream and downstream assets and the impacts on them.

Probability is the likelihood that failure will occur, regardless of the consequences of failure. Probability is based on the age of the asset and its condition, load factor, network relationships and other considerations.

Risk is the product of failure probability multiplied by criticality.
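That product is what drives prioritization. The fleet below is entirely hypothetical, but it shows how the ranking falls out: a moderately reliable asset with a huge consequence can outrank a shakier asset with a small one.

```python
# Risk = probability of failure x criticality (economic consequence).
# Hypothetical fleet used to illustrate maintenance prioritization.
assets = [
    {"id": "XFMR-101", "prob": 0.10, "criticality_usd": 10e6},
    {"id": "BRKR-220", "prob": 0.05, "criticality_usd": 2e6},
    {"id": "XFMR-315", "prob": 0.03, "criticality_usd": 50e6},
]
for a in assets:
    a["risk_usd"] = a["prob"] * a["criticality_usd"]

# Rank for maintenance prioritization, highest risk first.
ranked = sorted(assets, key=lambda a: a["risk_usd"], reverse=True)
print([a["id"] for a in ranked])  # ['XFMR-315', 'XFMR-101', 'BRKR-220']
```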

How precisely do you compute your risks?

Where the data for calculating the asset health index, probability of failure and criticality are quantified and precise, our calculated risk metrics are precise.

How do you take “expert feeling” into account?

Our software provides flexibility to customers to tune the asset health index, criticality and probability metrics to match their knowledge and experience by allowing them to modify the factors and weightings in algorithms.
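A minimal sketch of what "modifying factors and weightings" means in practice; the factor names and scores here are made up, not the product's actual algorithm:

```python
def health_index(factors, weights):
    """Weighted asset health index on a 0-100 scale. Factor scores
    and weights are utility-tunable; the names are illustrative."""
    total_weight = sum(weights.values())
    return sum(factors[name] * w for name, w in weights.items()) / total_weight

factors = {"dga": 60, "oil_quality": 80, "load_history": 90}

# An expert who trusts dissolved-gas analysis most can weight it up,
# pulling the index toward the DGA score.
default = health_index(factors, {"dga": 1, "oil_quality": 1, "load_history": 1})
expert = health_index(factors, {"dga": 3, "oil_quality": 1, "load_history": 1})
print(round(default, 1), round(expert, 1))  # 76.7 70.0
```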

How do you maintain temporal consistency when you have very fast data streams such as PMU and inspection reports which may be once a year?

Our software uses the most recent applicable data in its calculations, regardless of how frequently that data arrives. Where you get into trouble is with outdated data, whether it's monthly data that's a year old or hourly data that's a week old. This is why our software computes two additional indices, Completeness and Confidence. The Completeness index identifies whether any data sets were unavailable for computation, while the Confidence index measures whether the data sample expected at a point in time was received before the indices were computed. Together they can flag poor data quality, limited data availability or a missed inspection cycle.
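One way to picture those two indices (an illustrative sketch of the idea, not the product's implementation): freshness is judged against each stream's own cadence, so sub-second PMU data and annual inspections are held to different clocks.

```python
from datetime import datetime, timedelta

def completeness_and_confidence(expected, received, now, cadences):
    """Completeness: fraction of expected data sets that arrived at all.
    Confidence: fraction that are also fresh relative to their own
    cadence. Illustrative sketch, not the vendor's actual algorithm."""
    present = [s for s in expected if s in received]
    completeness = len(present) / len(expected)
    fresh = [s for s in present if now - received[s] <= cadences[s]]
    confidence = len(fresh) / len(expected)
    return completeness, confidence

now = datetime(2016, 6, 1)
received = {
    "pmu": now - timedelta(seconds=1),        # fast stream, fresh
    "inspection": now - timedelta(days=500),  # annual report, overdue
}
cadences = {"pmu": timedelta(seconds=30), "inspection": timedelta(days=365)}

comp, conf = completeness_and_confidence(
    ["pmu", "inspection", "dga"], received, now, cadences)
print(round(comp, 2), round(conf, 2))  # 0.67 0.33
```

The missing DGA sample lowers Completeness; the 500-day-old inspection additionally lowers Confidence, flagging a missed inspection cycle.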

Is there a way you can estimate the sensitivity of the risks to your entire system?

Our criticality scores incorporate network connectivity information, and therefore reflect impact on the entire system. Let’s assume that, through an oversight in network design, you have a new, small transformer that sits at the nexus of your entire network and has no redundancy. That transformer could have a very low probability of failure, because it’s brand new, and a sky-high criticality score because it’s the linchpin of your network.
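The linchpin effect drops out of even a toy connectivity model. Everything below (the network, customer counts, and the breadth-first walk) is invented for illustration; a real topology processor works over the utility's actual network model.

```python
from collections import deque

# Toy radial network: which assets feed which. Names are made up.
feeds = {
    "substation": ["xfmr_nexus"],
    "xfmr_nexus": ["feeder_a", "feeder_b"],
    "feeder_a": ["cust_group_1"],
    "feeder_b": ["cust_group_2", "cust_group_3"],
}
customers = {"cust_group_1": 400, "cust_group_2": 350, "cust_group_3": 250}

def downstream_customers(asset):
    """Breadth-first walk of the connectivity model, totaling the
    customers who lose power if this asset fails."""
    total, queue = 0, deque([asset])
    while queue:
        node = queue.popleft()
        total += customers.get(node, 0)
        queue.extend(feeds.get(node, []))
    return total

# The brand-new nexus transformer is highly critical despite a low
# probability of failure: every customer sits downstream of it.
print(downstream_customers("xfmr_nexus"))  # 1000
```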


Ajay Madwesh is Vice President of the Utilities Business Unit at Space-Time Insight. He possesses more than 20 years of experience in software development and technology management in utility and process automation environments, and has spent several years evangelizing the integration of real-time operational technologies with IT. He previously held leadership roles at companies such as GE, ABB and Infosys.
