PREDICTIVE ANALYTICS IN INTEGRITY MANAGEMENT: a ‘smarter’ way to maintain physical assets


    Safe and reliable transportation of products is the backbone of pipeline companies. In order to avoid costly and hazardous product leaks, pipeline companies spend considerable amounts of money to maintain the integrity of their assets. Ensuring the integrity of assets, such as pipes, pumping units, meters and valves, requires a robust maintenance strategy that minimizes asset/ equipment failures. In this article, Ashish Tyagi and Jay Rajagopal discuss how predictive analytics can help make asset integrity management more reliable and cost-effective.

    TRADITIONAL APPROACHES TO INTEGRITY MANAGEMENT
    Some people who own cars are so hard-pressed for time that they neglect to maintain them. When something goes wrong, they take their cars to repair shops to get them fixed. However, this process wastes time, energy and money, since many of these repairs could have been avoided with proper maintenance. Similarly, companies that manage physical assets, such as pipes, turbines and vessels, have long used a similar corrective maintenance approach to manage and operate their systems. And while they know the risks and costs associated with it, they are often constrained by other higher-priority activities.

    Other car owners are aware that it pays to prevent costly repairs before breakdowns occur. They diligently perform prescribed maintenance activities according to the car manufacturer’s schedule, such as changing the oil every few thousand miles. As a result, the chances of an unexpected outage using this preventive maintenance approach are much lower since inspection and service tasks are preplanned. However, a set amount of money is still spent for such activities and the car owner is left with questions, such as, “Will my car run fine if I delay an oil change for another 1,000 miles?” or “If I drove the same number of miles this summer in hotter, dustier conditions than I did last summer, should I take my car in for service sooner?” A fixed maintenance schedule cannot answer these questions since it does not take into account operating conditions which are key influencers of the performance of an asset.

    THE PREDICTIVE APPROACH TO INTEGRITY MANAGEMENT
    With significant technology advances, companies are much better equipped to remotely monitor assets and put in place a more intelligent system that senses the state of various components and predicts the type of maintenance required based on actual operating conditions. Honda’s Maintenance Minder System is one such example. It shows the remaining oil life and assigns a code that helps the owner identify which service activities should take place during the next visit.

    This smarter way of taking care of assets is the predictive maintenance approach to managing integrity, and while it still encompasses preventive maintenance activities, it does so in a more focused and cost-effective way.

    Predictive maintenance involves the use of continuous or periodic equipment monitoring or prior events to predict the need for maintenance before an unexpected failure actually occurs. This is different from preventive (or planned) maintenance in which maintenance is conducted on a scheduled basis, and corrective (or reactive) maintenance in which maintenance is conducted after a failure has occurred.

    Advantages of predictive maintenance
    Predictive maintenance helps to lower costs, improve operational availability and optimize maintenance frequency.

    1. Cost: Predictive maintenance significantly lowers costs compared to corrective maintenance, since a catastrophic failure can take much longer to fix than a planned maintenance activity, resulting in longer interruptions to operations (e.g., a pipeline that is out of service for an extended period).
    2. Operational availability: Since predictive maintenance is planned in advance, it allows equipment to be serviced when it is idle or when the outage is planned, whereas reactive maintenance may lead to costly equipment downtime while waiting for spare parts or skilled resources to become available.
    3. Optimized frequency: Predictive maintenance is typically based on models that take into consideration the current or latest equipment performance, providing a more optimized maintenance frequency. Preventive maintenance, on the other hand, sometimes happens more often than required (resulting in higher costs), and sometimes less often than required (resulting in potentially faster asset degradation).

    PREDICTIVE ANALYTIC TECHNIQUES
    Predictive analytics is not a new concept or field. In fact, predictive techniques date back to the 1600s, when insurance companies used historical data to predict risk for underwriting purposes. The concept still holds true today; the fundamental distinction between techniques now lies in who, or what, performs them. In the field of equipment maintenance, there are two fundamental approaches:

    1. Experience-based prediction by individuals: Business subject matter experts (SMEs) have an understanding of past failure patterns and are able to predict potential failures purely based on their experience.
    2. Model-based prediction by systems: Analytical models are created that use historical data as input and provide future failure predictions as output. An element of experience-based prediction is present in models since they need to mimic real-world experiences as closely as possible.

    Table 1: The key differences between the two predictive analytic approaches.

    PREDICTIVE MAINTENANCE—THE MODEL-BASED APPROACH
    Simply put, a model-based predictive maintenance initiative involves gathering relevant equipment and operating data, constructing a statistical/mathematical model (typically some form of regression model) that fits the data, and using that model to extrapolate into the future, thus making predictions about unknown events. These unknown events may or may not materialize, but actual data related to them will continue feeding the models, which can then be further tuned to improve the accuracy of future predictions.
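    As a concrete illustration, this fit-then-extrapolate cycle can be sketched in a few lines. The degradation values, time axis and failure threshold below are entirely hypothetical, and a simple linear least-squares fit stands in for whatever regression form a real model would use:

```python
import numpy as np

# Hypothetical history: days since installation vs. a measured
# degradation indicator (e.g., vibration amplitude). Illustrative
# values only, not real equipment data.
days = np.array([30, 60, 90, 120, 150, 180], dtype=float)
degradation = np.array([0.10, 0.13, 0.17, 0.20, 0.25, 0.29])

# Fit a simple linear regression (least squares) to the history.
slope, intercept = np.polyfit(days, degradation, deg=1)

# Extrapolate: predict when degradation crosses an assumed failure
# threshold of 0.50, i.e. solve slope*t + intercept = 0.50 for t.
threshold = 0.50
predicted_day = (threshold - intercept) / slope
print(f"Predicted threshold crossing around day {predicted_day:.0f}")
```

    As new readings arrive, they are appended to the history and the fit is rerun, which is the feedback loop the paragraph above describes.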


    Figure 1: Life cycle of predictive modeling.

    Model-based prediction can further be divided into three sub-types when considering the types of data as well as how frequently that data is fed into the model:

    1. Failure Event Data. Only past equipment failure (or near-failure) events are captured on a timeline to create a relationship between events and time. A suitable regression model is created based on this relationship and future events are predicted.
    2. Operational Data Monitoring—Periodic. Periodic operational data from equipment—such as vibration, temperature and the viscosity of the commodity flowing in the pipe—is used to create a model that establishes a baseline relationship between operational data and equipment performance. The deviation between the equipment’s baseline performance and its actual performance is regressed to predict future equipment failure.
    3. Operational Data Monitoring—(Near) Real-Time. Real-time (or near real-time) operational data from equipment is used to create a model that establishes a baseline relationship between the operational data and equipment performance. The deviation between the equipment’s baseline performance and its actual performance is regressed to predict future equipment failure.
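    To make sub-type 1 concrete, here is a minimal sketch of failure event extrapolation: regressing past failure times against their event number and projecting the next event. The failure dates are invented for illustration, and a linear fit is only the simplest candidate model:

```python
import numpy as np

# Hypothetical past failure events, expressed as days since a
# reference date (illustrative values only).
failure_days = np.array([110.0, 245.0, 360.0, 470.0, 575.0])
event_number = np.arange(1, len(failure_days) + 1)

# Regress failure time against event number. With a linear fit, the
# slope is effectively the average interval between failures.
slope, intercept = np.polyfit(event_number, failure_days, deg=1)

# Predict the timing of the next (6th) failure event.
next_event_day = slope * (len(failure_days) + 1) + intercept
print(f"Next failure predicted around day {next_event_day:.0f}")
```

    In practice a curved model (e.g., polynomial or Weibull-based) would capture shrinking intervals between failures better than a straight line.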

    Predictive Modeling Requirements
    Irrespective of the sub-type of predictive modeling being considered, there are a few key requirements for the model to succeed in terms of data, people and technology.

    1. Data
      • Quantity: Just as more experience typically leads to better decisions, the more data there is, the more accurate a model’s predictions are likely to be. Failure event extrapolation and periodic operational monitoring-based models require at least 15 to 20 valid historical data points under varying operational conditions to provide meaningful predictions. Real-time monitoring-based models do not need much history, since they can ingest the required number of data points within minutes.
      • Quality: The quality of historical data is of vital importance. Random bias or low precision in historical data will skew predictions. The analysis of past data can help identify potential data capture issues and provide a roadmap to improving the data capture process and other data governance processes in the future.
    2. People
      • Business users: Historical failure event extrapolation needs few inputs from business users or technical SMEs. However, periodic and real-time monitoring-based predictions need deeper involvement from business users, who can guide technology teams on the engineering concepts behind data point readings in order to interpret their relevance to the prediction.
      • Technology implementation team: Historical failure event extrapolation needs only a basic knowledge of statistical concepts. However, periodic and real-time monitoring-based predictions require business analysts who are not only adept at mathematics, statistics and information technology, but can also grasp the basics of the engineering concepts involved in equipment operations.
    3. Technology
      • Asset/Equipment: Historical event extrapolation needs a smaller technology footprint when compared to operational data-based models that need sensors to monitor operational parameters and transmit them (in real time or near real time if required) to Supervisory Control and Data Acquisition (SCADA) systems.
      • Information (i.e., IT): Historical event extrapolation has minimal software requirements (a database and a visualization/presentation layer will suffice in most cases). Periodic operational data monitoring-based prediction models may be created using similarly minimal software, with optional statistical modeling tools depending on the sophistication of the requirements. Near real-time operational data monitoring-based predictions have higher software requirements, primarily to deal with the acquisition and storage/management of large volumes of data, along with more sophisticated statistical modeling to handle potentially one new data point every second.

    Table 2: Comparison of modeling types for predictive maintenance.

    EXAMPLE OF PREDICTIVE MAINTENANCE AT A PIPELINE COMPANY
    Meter Proving: What is it?
    Meter proving is the process of determining the accuracy of a meter by comparing its register reading to the register reading of a base meter (the prover) of known accuracy. Ideally, the meter should show the same reading as the prover. However, because operating conditions change, meters must be proven regularly so that measurement accuracy is maintained. The meter factor is the ratio of a meter’s reading to a prover’s reading under the same operating conditions.
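    Using the definition above, a meter factor calculation reduces to a simple ratio; the readings below are invented for illustration:

```python
# Per the definition above, meter factor = meter reading / prover reading.
meter_reading = 1002.4   # barrels indicated by the meter (illustrative)
prover_reading = 1000.0  # barrels measured by the prover (illustrative)

meter_factor = meter_reading / prover_reading
print(f"Meter factor: {meter_factor:.4f}")

# A factor of exactly 1.0 means the meter agrees with the prover;
# sustained drift away from 1.0 under stable operating conditions
# suggests meter degradation.
deviation_pct = abs(meter_factor - 1.0) * 100
print(f"Deviation from prover: {deviation_pct:.2f}%")
```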

    Why Prove Meters?
    Federal rules require all pipeline operators to calibrate (or prove) their meters upon changes in operating conditions that affect the meters’ performance. Changes in pressure, temperature, density (water content), viscosity and flow rate are examples of factors that can trigger the need to reprove a meter. However, there are no specific criteria regarding the amount of change that warrants a proving, as each meter has its own operational range. Leaving a meter unproven can lead to the following:

    1. Inaccurate billing and loss of productivity:
      • If a meter consistently under-measures, the company loses revenue on deliveries and gains on receipts
      • If a meter consistently over-measures, the company loses revenue on receipts and gains on deliveries

    In either case, if the measurement error is more than a specific threshold, the company is required to send adjustment invoices, which results in loss of productivity.

    2. Revenue loss due to false volume imbalance alarms: Flow rate data from multiple meters in the same pipeline is typically used as a criterion for leak detection. When operating conditions push a meter’s performance beyond its control limits, the volume measured per unit time will differ across meters in the same pipeline, triggering an alarm even though there is no leak. Such alarms may cause the pipeline to be shut down until the cause of the alarm is identified, resulting in revenue loss.
    3. Unwanted repair/replacement costs: When a meter’s performance degrades below a specific threshold, it needs repair or replacement. Predicting meter failure in advance can prevent billing issues as well as costly outages.
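    A volume imbalance check of the kind that raises these alarms boils down to a simple line balance; the volumes and the 0.5% control limit below are assumed for illustration:

```python
# Illustrative line-balance check: inlet vs. outlet metered volumes
# over the same period. All numbers and the threshold are assumptions
# for this sketch, not values from any standard.
receipts = 5000.0     # barrels measured at the inlet meter
deliveries = 4962.0   # barrels measured at the outlet meter

imbalance_pct = abs(receipts - deliveries) / receipts * 100
ALARM_THRESHOLD_PCT = 0.5  # assumed control limit

if imbalance_pct > ALARM_THRESHOLD_PCT:
    print(f"Imbalance {imbalance_pct:.2f}% exceeds threshold: alarm raised")
```

    A drifting, unproven meter can produce exactly this kind of imbalance with no physical leak present, which is why keeping meter factors current reduces false alarms.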

    Is there a smarter way to address meter proving?

    • Traditional proving reports show the shift in meter factor since the last time the meter was proved, but do not show whether it is shifting consistently in one direction (i.e., whether it is at risk of crossing control limits). Pulling this information together either requires a lot of manual effort or an analytics model that analyzes trends in historical data and predicts when a meter will require repair or replacement.
    • Traditional meter proving is done on a fixed schedule (either time-based or volume-based), which may not be optimal. An approach that takes operating conditions into account and leverages them in statistical models can help predict when a meter actually requires reproving.

    High-Level Solution Approach
    The high-level steps shown in Figure 2 can be used to implement solutions to any analytics modeling problem.

    For meter proving, the following basic modeling steps are useful to keep in mind before factoring in specific models for the situation at hand.

    1. Develop Meter Reference Curve. Prove the meter in various operating conditions (or use historical proving data) to create a baseline curve
    2. Calculate Reference Meter Factor. Use the reference curve to calculate a reference meter factor corresponding to the actual meter factor on proving reports
    3. Plot Meter Drift. Using the reference and actual meter factors, plot the deviation (the lower the deviation, the better the meter is performing)
    4. Extrapolate Meter Drift Curve to Control Limits. Develop a regression model of the meter drift and predict when the meter will reach the control volume level (based on current flow rates) and will thus drift beyond thresholds that trigger a reproving
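    The four steps above can be sketched end to end. Every number here (proving dates, meter factors, a flat reference curve and the +0.005 control limit) is hypothetical, and a linear drift model is only one possible choice:

```python
import numpy as np

# Steps 1-2: reference meter factors from a baseline curve (assumed
# constant 1.0000 here for simplicity) vs. actual factors taken from
# successive proving reports (illustrative values).
proving_dates = np.array([0, 30, 60, 90, 120], dtype=float)  # days
actual_factor = np.array([1.0005, 1.0012, 1.0019, 1.0027, 1.0033])
reference_factor = 1.0000

# Step 3: meter drift = deviation of actual from reference factor.
drift = actual_factor - reference_factor

# Step 4: regress drift against time and extrapolate to the control
# limit that triggers a reproving (assumed +0.005 for this sketch).
slope, intercept = np.polyfit(proving_dates, drift, deg=1)
control_limit = 0.005
days_to_limit = (control_limit - intercept) / slope
print(f"Control limit predicted at ~day {days_to_limit:.0f}")
```

    In a real implementation the reference factor would itself vary with operating conditions (flow rate, temperature, viscosity) via the reference curve from step 1, rather than being a constant.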

    Figure 2: Approach to tackling an analytics problem.

    Solution Benefits
    In addition to the benefits of meter proving listed earlier, a predictive approach offers the following:

    1. Cost Efficiencies:
      • The company need only prove when necessary (apart from federally mandated provings)
      • The company need not spend resources chasing false imbalance/leak alarms
    2. Improved Equipment Reliability:
      • The company has the ability to control operating conditions to extend life of meters
      • Timely diagnostic information leads to proactive maintenance

    CONCLUSION
    The pipeline workforce continues to be burdened by administrative tasks, especially sifting through and analyzing data. These tasks impair its ability to accomplish important value-added functions. As the power of analytics continues to grow, it is important to use technology not just to provide people with reports, but to actually shoulder the burden of analysis and deliver insights and predictions. Employees will then be able to focus more time and effort on important decisions. The future is even more exciting with the prospect of full automation, such as analytics solutions integrated with maintenance management systems that order replacement parts directly with minimal human intervention. But until that day arrives, pipeline companies can begin taking steps in that direction. Doing so offers firms an opportunity to leverage the benefits of predictive analytics, stay ahead of the competition and make the workplace a much better environment for employees.

    The Authors
    Ashish Tyagi
    is a Manager with Sapient Global Markets and leads analytics engagements for midstream clients. He has helped large energy and investment banking clients with data analysis and modeling, data migration and application performance management.

    Jay Rajagopal
    is a Director with Sapient Global Markets and focuses on building and executing strategic initiatives for midstream companies. He has led several large advisory and technology implementation engagements across the midstream value chain including pipeline management systems, gas utility systems and the implementation of trading packages.

    Resources

    1. Preventive Maintenance Strategies using Reliability Centered Maintenance: http://oce.jpl.nasa.gov/practices/pm4.pdf
    2. Honda’s Maintenance Minder System: http://owners.honda.com/service-maintenance/minder
