A new Chevron-led work flow allows the oil company to combine organic field data with physics-based simulation models and machine-learning techniques, arriving at more accurate predictions of well performance and, ultimately, a reliable production forecast for unconventional oil fields.
Standard production forecast techniques for unconventional asset development rely mostly on field data, which can suffer from limitations in both quality and quantity. Interpreting subsurface dynamics directly from field observations is also a challenge. Popular methods such as decline curve analysis can be hampered by limited data samples and too many variables. Reservoir simulation depends mostly on finding a good history match for the current field, but this approach is resource-intensive and requires specialized expertise.
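To illustrate why limited data samples hamper decline curve analysis, the sketch below fits the classic Arps hyperbolic decline model to a short, noisy production history. The well parameters and noise level are hypothetical, not from the article; this is a minimal example of the standard technique, assuming SciPy is available.

```python
import numpy as np
from scipy.optimize import curve_fit

def arps_hyperbolic(t, qi, di, b):
    """Arps hyperbolic decline: production rate at time t (months)."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

# Hypothetical well: 24 months of noisy monthly rates (illustrative values)
t = np.arange(24, dtype=float)
true_rate = arps_hyperbolic(t, qi=950.0, di=0.15, b=0.9)
rng = np.random.default_rng(0)
observed = true_rate * (1.0 + 0.05 * rng.standard_normal(t.size))

# Fit the three decline parameters to the limited, noisy sample
(qi, di, b), _ = curve_fit(
    arps_hyperbolic, t, observed,
    p0=[800.0, 0.1, 1.0],
    bounds=([0.0, 0.0, 0.01], [5000.0, 5.0, 2.0]),
)

# Extrapolate the fitted curve beyond the observed window
forecast_36mo = arps_hyperbolic(36.0, qi, di, b)
```

With only two years of data, the fitted `b` and `di` trade off against each other, so long-range forecasts from this curve carry wide uncertainty, which is exactly the limitation the Chevron work flow tries to address.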
“Everything starts with the data,” said Kainan Wang, machine-learning scientist at Chevron, speaking at the Energy in Data conference earlier this month. “It is the key to successful machine learning.”
Data issues often arise in unconventional reservoir development. Missing data can become a problem if there are not enough wells in a certain area or with a specific completion design. Limited time on production also can be a factor. Noisy data issues usually are encountered when the measurements are complex and error-prone, or too expensive to acquire.
The Chevron work flow augments the data in-hand with reservoir simulation. The simulation creates “synthetic data” in order to fill in the blanks in missing data cases, or tighten the band in the case of noisy data.
“This idea in the machine-learning world is not new,” Wang said. “You generate synthetic data and put it side by side with the real data and analyze with machine-learning tools the mixture of data sets. In autonomous vehicles, you have simulators for the machine-learning algorithms to learn all different types of scenarios, especially the most dangerous ones. In object detection, you can rotate objects or morph them into different shapes.”
In the oil and gas world, seismic interpretation has been one discipline that has seen some early stage success with synthetic data—using synthetic seismic volumes with a synthetic fault and horizon system and training machine-learning algorithms to be plugged into the real data set for fault and horizon detection.
“That’s what we are doing here, but using reservoir simulators instead, and to generate production data,” Wang added.
Combining a physics-based model with machine learning, assuming the machine-learning model is trained well, can result in extremely fast processes on certain applications, such as predicting volumes of future wells based on existing field data.
“One of the key components for a successful data-science project is for the model to be interpretable,” Wang said. “The scientists want to be able to explain the prediction, why the model is saying the well is performing better or worse than the reference wells. One of the biggest advantages of this work flow is, by connecting to the subsurface or Earth models through simulation, we are able to explain why.”