Deepwater oil and gas facilities incur up to an estimated 5% annual production loss, worth billions of dollars, because of unplanned downtime. This paper describes an automated work flow that uses sensor data and machine-learning (ML) algorithms to predict impending unplanned shutdown events, identify their root causes, and provide actionable insights. Systematic application of such a method could prevent unfavorable operational situations in real time using equipment and process sensor data.
An assessment of the magnitude of deferred production resulting from unplanned shutdowns at one operator’s deepwater facility revealed that, overall, 43% of all shutdown events fall into the unplanned-but-controllable category. Here, “controllable” means that, if the event is identified ahead of shutdown, mitigation is possible to avoid the shutdown. Fig. 1 provides further classification of unplanned downtime contributed by automation hardware failure, equipment failure, process trips, and production ramp-up.
The existing toolkit and systems in place are not always adequate to identify and predict abnormal events that could lead to unplanned facility shutdown. The interactions among process subsystems, and the disturbances that propagate across them as operating conditions change, are hard to predict without a fit-for-purpose model (or a digital twin).
Engineers and operators typically visualize key sensor data and, using their knowledge of the process and control strategy, attempt to identify anomalous process behavior on the basis of alarms received. Topside facilities often have alarms configured with minimum and maximum trip settings. As sensor data approach those trip settings, a pre-alarm is generated and the control-room operator is notified. If a trip involves multiple process variables and the alarms are not rationalized adequately, the operator may be inundated by alarms, because alarms are designed for each individual sensor rather than at the system level. A sound alarm-rationalization philosophy and best practices can mitigate alarm flooding, but multivariate precursors will not always be captured by examining sensors one at a time. Moreover, detecting early signs of an impending abnormal event requires analysis of hidden process signals before they manifest as alarms. Therefore, an intelligent advisory system is desired that can:
- Ingest numerous sensor data
- Generate a single alarm indicating the health of a particular system or piece of equipment
- Predict abnormal events that could lead to a shutdown
- Potentially provide insight to prevent upcoming shutdown through root-cause analysis
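The gap between per-sensor alarms and a single system-level indicator can be illustrated with a minimal sketch. The sensor names, units, and trip limits below are hypothetical, not from the paper: each sensor stays inside its own trip band, so no individual alarm fires, yet a combined health score shows the system drifting toward trouble.

```python
from statistics import mean

# Hypothetical per-sensor trip limits (illustrative only, not from the paper).
LIMITS = {
    "suction_pressure": (40.0, 60.0),   # psi, min/max
    "discharge_temp":   (80.0, 120.0),  # degF, min/max
    "vibration":        (0.0, 8.0),     # mm/s, min/max
}

def per_sensor_alarms(reading):
    """Classic approach: one alarm per sensor when it leaves its trip band."""
    return [tag for tag, (lo, hi) in LIMITS.items()
            if not lo <= reading[tag] <= hi]

def health_score(reading):
    """Single system-level indicator: mean normalized distance of each
    sensor from the centre of its band (0 = centred, 1 = at a trip limit)."""
    devs = []
    for tag, (lo, hi) in LIMITS.items():
        centre, half_span = (lo + hi) / 2, (hi - lo) / 2
        devs.append(abs(reading[tag] - centre) / half_span)
    return mean(devs)

# Every sensor is inside its band, so the per-sensor approach stays silent...
reading = {"suction_pressure": 58.0, "discharge_temp": 116.0, "vibration": 7.2}
print(per_sensor_alarms(reading))       # []
# ...but the combined score shows every sensor sitting near its limit.
print(round(health_score(reading), 2))  # 0.8
```

A single score like this is the simplest form of the "one alarm per system" idea; the paper's anomaly score plays the same role in a multivariate ML setting.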
The focus of the complete paper is on a deepwater facility with several oil-export pipeline pumps in parallel and several gas compressors in series. The alarm database showed records of several unplanned shutdown events around these critical components that resulted in undesirable outcomes such as production deferment, complete facility shutdown, loss of sales volume, and increased operational costs.
The authors propose an intelligent prognostic solution using an ML framework for automatic prediction of impending facility downtime and identification of key causative process variables. A systematic work flow was developed to identify, cleanse, and process real-time data for both model training and prediction. Several ML methods were evaluated; anomaly-detection models based on principal-component-analysis (PCA) and autoencoder (AE) algorithms were found to perform best for the type of data available for the deepwater facility. The ML framework also supported analysis of underlying downtime causes to propose suitable mitigation steps.
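A PCA-based anomaly detector of the kind the authors describe can be sketched as follows. The synthetic data, component count, and scoring rule are illustrative assumptions, not the paper's implementation: fit PCA on normal-operation data, project each new sample onto the retained components, and use the reconstruction error as the anomaly score.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_pca(X, n_components):
    """Fit PCA on 'normal' training data: centre it, then keep the top
    eigenvectors of the covariance matrix."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    W = eigvecs[:, -n_components:]     # top principal components
    return mu, W

def anomaly_score(x, mu, W):
    """Squared reconstruction error after projecting onto the retained
    components; large values mean the sample leaves the 'normal' subspace."""
    xc = x - mu
    recon = W @ (W.T @ xc)
    return float(np.sum((xc - recon) ** 2))

# Hypothetical training set: two correlated 'sensors' plus small noise.
t = rng.normal(size=500)
X = np.column_stack([t, 2 * t + 0.05 * rng.normal(size=500)])
mu, W = fit_pca(X, n_components=1)

normal = np.array([1.0, 2.0])     # follows the learned correlation
anomaly = np.array([1.0, -2.0])   # breaks it, though each value is ordinary
print(anomaly_score(normal, mu, W) < anomaly_score(anomaly, mu, W))  # True
```

Note that the anomalous sample is flagged even though each of its two values, viewed alone, is unremarkable; this is the multivariate precursor that per-sensor alarms miss.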
To achieve robust and reliable prediction, multiple subsystems were identified through statistical analysis of historical alarm data, guided by process knowledge. Both process and critical-equipment sensor data were included in the ML models for anomaly detection. The models were trained on historical records and set up to detect anomalous behavior in real time by monitoring multivariate patterns in the sensor data and representing system health with a single indicator called an anomaly score. Proactive detection and diagnosis of unplanned shutdowns is essential to reducing operational and surveillance costs, and optimized maintenance of critical equipment supports the same goal.
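Turning a continuous anomaly score into a single actionable alarm can be sketched with a simple persistence rule. The threshold and persistence window below are illustrative assumptions, not the paper's settings: alert only when the score stays above the threshold for several consecutive samples, which suppresses one-sample spikes.

```python
from collections import deque

def make_alarm(threshold, persistence):
    """Return a stateful checker that raises a single system-level alarm
    only when the anomaly score exceeds `threshold` for `persistence`
    consecutive samples (e.g., one sample per minute)."""
    recent = deque(maxlen=persistence)
    def check(score):
        recent.append(score > threshold)
        return len(recent) == persistence and all(recent)
    return check

check = make_alarm(threshold=3.0, persistence=3)
scores = [0.5, 4.1, 0.7, 3.2, 3.5, 3.9]   # one spike, then sustained drift
alarms = [check(s) for s in scores]
print(alarms)   # [False, False, False, False, False, True]
```

The isolated spike at the second sample is ignored; only the sustained excursion at the end raises the alarm, which is the behavior wanted from a system-health indicator rather than a raw sensor alarm.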
The complete paper provides an overview of the topsides process description and details of the current surveillance tools and limitations. Following the problem-statement description, the ML framework for anomaly detection is discussed in detail along with a field application of the proposed work flow to selected process subsystems. Then, the value captured during the testing of the proposed solution is documented. Finally, the deployment framework and conclusions are summarized with potential future research work (Fig. 2).
Case studies in the paper present diagnostic charts. Identified early indicators were found to be in agreement with pre-alarms generated by existing systems, thus validating the ML solution. The paper also describes how the ML framework can be scaled for a sustainable solution that provides prediction every minute; keeps the model evergreen using a cloud-based model deployment platform to train, predict, and trigger automatic model updates; and spans multiple process systems and facilities.
Both the PCA and AE ML algorithms were found to be effective when deployed in an adaptive manner.
- Overall prediction statistics showed that a significant number of unplanned shutdowns were detected by the ML models, with a reasonable false-alarm rate and relevant key sensor data identified ahead of time.
- The data-driven models are less complex to maintain and update than physics-based models. The multiphysics, multiscale phenomena involved are often not conducive to modeling with physics-based methods alone, and, with changing operating conditions, aging facilities, and shifts in reservoir pressure or fluid properties, recalibration of physics-based models is critical yet often complex and time-consuming. The proposed adaptive ML modeling framework leverages a cloud computing platform for automatic model updates, providing a sustainable and scalable solution.
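One simple form of the adaptive-update idea is periodic recalibration of the alarm threshold from a sliding window of recent anomaly scores. The window size, percentile, and drifting-score series below are illustrative assumptions:

```python
from statistics import quantiles

def adaptive_threshold(scores, window=200):
    """Recompute the anomaly-score alarm threshold from the most recent
    scores, so the detector stays 'evergreen' as operating conditions drift."""
    recent = scores[-window:]
    # 99th percentile of recent scores becomes the new alarm threshold.
    return quantiles(recent, n=100)[98]

# Illustrative drift: baseline scores creep upward over time.
history = [0.01 * i for i in range(1, 401)]
old = adaptive_threshold(history[:200])
new = adaptive_threshold(history)
print(old < new)   # True: the threshold tracks the drifting baseline
```

In a production deployment this recalibration (and full model retraining) would run on a schedule or be triggered automatically, which is the role the paper assigns to the cloud-based deployment platform.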
- Future work should include the following:
- Extension to other subsystems and to multisubsystem anomaly detection
- Use of engineering-based transformed sensor data as input features into ML models
- Automatic fine-tuning of threshold settings based on feedback from end users
- Collection of user feedback for true events and false alarms and the use of that data to build supervised ML models for unplanned shutdown classification and anomaly detection