Drilling Rigs & Automation | July/August 2019

BOP case study: Data analytics used to drive efficiency in anomaly detection, maintenance and design

Same methodology can be applied to many BOP and other equipment components to reduce operational risks and total cost of ownership

By Javier Franco, Baker Hughes, a GE Company

We are well into the fourth industrial revolution, Industry 4.0, which is marked by the widespread adoption of automation and data exchange in manufacturing industries and in society at large. One need only look at the phones, computers, TVs and vehicles around us – many of which are now embedded with “smart” technology – to realize that we are all part of this revolution. By some estimates, as much as 90% of the world’s data has been generated in the past few years, and advances in smart devices have been a key contributor to that growth.

While the oil and gas industry has been lagging industries such as automotive and aerospace in its adoption of Industry 4.0 principles and processes, there are clear signs that it is following suit. Whether it is changes in regulations or a need to improve operating efficiencies in today’s lower-for-longer oil price environment, companies are driven to make better use of the data collected.

OEMs, contractors and operators are all leveraging more data to help extract value – an effort that has been enabled by data acquisition systems. Such systems were originally considered mere equipment add-ons to acquire and store data. But as the value of data becomes more apparent and the need for detailed data analysis more critical, the industry will increasingly rely on data acquisition systems in the same manner that smartphones have become a part of our everyday lives.

Figure 1: A simplified schematic of the system analyzed by BHGE for an offshore contractor.

Simply acquiring data, however, does not provide value if it just resides on a local server on the rig or even in the cloud. An effort must be made to mine the large volume of data and generate insights. Even then, the true value of these insights is wasted unless action is taken.

Oil and gas operators and service companies alike are realizing that they cannot act on insights by simply putting more people on the rig floor. They need personnel with specialized skill sets to help extract the full value of the data they collect. As a result, companies are recruiting personnel with data science expertise to help with the data mining and insight generation efforts.

The days of using spreadsheets for large data analysis are gone. Like the tech and finance sectors before it, the oil and gas industry is now deploying the latest technology stacks and machine learning techniques – in most cases, open source – to facilitate the value extraction process. As one indication of the industry’s growing interest in this space, the number of attendees at a leading digital oil and gas-focused conference has effectively doubled each year for the past two years.

While data scientists play a key role, they alone do not always have the oilfield application expertise and experience to find practical, value-added solutions to operational challenges. It is the combined effort of data scientists, subject matter experts and field service personnel that provides the key contribution to the data analysis process. Companies are building teams comprising these diverse skill sets and technical knowledge to generate quality insights that lead to actionable decisions.

Data’s Three Dimensions: Application to Case Study

Data is typically discussed in three dimensions: variety, volume and velocity. The industry has been making great progress in all three. Depending on the particular problem one is trying to resolve, however, it is often not necessary to take each dimension to its maximum limit. Take data volume, for example: If a performance indicator changes over a period of months, very little additional value is gained by capturing a large volume of data at a shorter time interval, such as 1-Hz resolution. In this case, collecting more data will not necessarily bring the operator any closer to solving the problem.
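As a minimal illustration of this point, the sketch below downsamples hypothetical 1-Hz pressure readings to hourly averages using pandas, preserving a slow-moving trend while cutting storage by orders of magnitude. The signal name, values and time span are placeholders, not data from any actual rig system.

```python
import numpy as np
import pandas as pd

# Illustrative 1-Hz pressure readings over one week (~600,000 samples).
# In practice this would come from the rig's data acquisition system.
idx = pd.date_range("2019-01-01", periods=7 * 24 * 3600, freq="S")
pressure = pd.Series(
    1500 + np.random.default_rng(0).normal(0, 5, len(idx)),
    index=idx,
    name="pressure_psi",
)

# An indicator that drifts over months does not need 1-Hz storage:
# hourly averages capture the same trend with roughly 3,600x less data.
hourly = pressure.resample("1H").mean()

print(f"Raw samples: {len(pressure):,}  ->  hourly samples: {len(hourly):,}")
```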

The variety of data has been expanding in the oil and gas industry. In 2016, the IOGP/IADC BOP Reliability Joint Industry Project (JIP) launched RAPID-S53 to collect data on a wide array of well control equipment (WCE) incidents. The volume of data has also increased significantly as more contractors, both inside and outside of the United States, have begun reporting incidents for their equipment. This event data, in conjunction with time-series data collected from the rig, has facilitated the identification of failure signatures in time-series data. The following field example illustrates this point.

Figure 2: In an analysis conducted by BHGE, two regulators in the same BOP system showed significantly different Life Consumption Metrics, even though in an ideal situation, the two regulators would age at the same rate. It can also be seen that the rate of change was increasing for Regulator 1. The analysis showed a diminished operating life for Regulator 1. It also confirmed for the contractor that there was a leak of a component located downstream of the regulator.

Regulators are instructive indicators of system health. By monitoring various signals, a diagnosis of various failure modes can be made. In one such analysis for an offshore contractor, Baker Hughes, a GE company (BHGE), analyzed these signals over a period of three months and was able to successfully generate insights.

The contractor had originally approached BHGE with the challenge of identifying a leak that was discovered during scheduled maintenance. A subsequent data analysis confirmed the leak and identified its source. But it also uncovered another system problem that was unknown to the contractor and was not easily detected with the tools available to the field personnel.

Figure 1 illustrates a simplified schematic of the system, which contained two regulators. In an ideal case, Regulators 1 and 2 would age at the same rate, assuming a balance of utilization and no downstream anomalies. However, the Life Consumption Metric for Regulator 1 was approximately 10x greater than Regulator 2 during the same period of time (Figure 2). The rate of change was also increasing for Regulator 1 (the red line), indicating that the anomaly was getting worse as time progressed.
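The article does not disclose how the Life Consumption Metric is computed. Purely as a sketch of the kind of comparison Figure 2 illustrates, the cumulative metric for each regulator can be trended and its recent rate of change checked for acceleration. The series below are synthetic, and the roughly 10x ratio is contrived to mirror the field example.

```python
import numpy as np
import pandas as pd

def consumption_rate(metric: pd.Series, window: str = "7D") -> pd.Series:
    """Approximate rate of change of a cumulative life-consumption metric."""
    # Differencing the cumulative metric over a rolling window exposes
    # whether consumption is accelerating, as it was for Regulator 1.
    return metric.diff().rolling(window).mean()

# Synthetic cumulative metrics indexed by date, one series per regulator.
dates = pd.date_range("2018-01-01", periods=90, freq="D")
reg1 = pd.Series(np.linspace(0, 1.0, 90) ** 1.8, index=dates)   # accelerating
reg2 = pd.Series(np.linspace(0, 0.1, 90), index=dates)          # steady, ~10x lower

print(f"Regulator 1 consumed ~{reg1.iloc[-1] / reg2.iloc[-1]:.0f}x "
      "more life than Regulator 2")
print("Regulator 1 recent rate of change:")
print(consumption_rate(reg1).tail(3).round(4))
```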

This higher-than-expected life consumption rate led to two insights: 1) the operating life of Regulator 1 had significantly diminished, and 2) a leak in a component downstream of the regulator was confirmed. Without the scheduled maintenance, the downstream leak would likely not have been discovered. While in operation, the component was never suspected of leaking, as there were no indicators that a human could identify with the available tools.

If the contractor had merely addressed the leak without being aware of the regulator issue, they would have resumed operation and likely faced a regulator malfunction, or outright failure, at a later date. The only likely solution would be to pull the stack and replace the regulator. Depending on water depth and availability of replacement parts, this option could end up costing millions of dollars in nonproductive time (NPT).

However, the real value to the business came from acting quickly on the insight generated by the data. Discovery of the regulator life degradation was well-timed, as it was found during an end-of-well service. The regulator was easily pulled and replaced prior to deploying the stack and commencing the drilling operation.

Figure 3: While an experienced subsea engineer might be able to identify a leak without use of data analytics – if the leak became severe enough – that approach would mean the asset is being pushed closer to the point of full functional failure in the above performance-failure curve. This increases operational risks and limits time for maintenance planning.

Could the component leak have been detected eventually without data, assuming the end-of-well maintenance was not planned for the leaking component? A well-qualified subsea engineer might say that he/she could have identified the leak if it became severe enough, simply by monitoring the on-time frequency of the surface pumps. While valid, this approach means that the failure has progressed and become so severe as to be more easily detected at the HMI. The asset would likely be pushed closer to the point of functional failure (Point F in the Performance-Failure curve in Figure 3), a situation that not only increases the risk to operations but also limits the time for maintenance planning or ordering of any necessary replacement equipment.

But by having rapid access to greater volumes of the required data (and subsequently analyzing the signatures in that data), anomalies can be identified much earlier than when a human might perceive them. This capability moves the operation closer to the point at which a failure can be detected (Point P in Figure 3) and gives the user more time to plan their repair strategy, with fewer surprises and less expense.
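One simplified way such earlier detection could work – not a description of BHGE's actual method – is to compare surface-pump on-time against a baseline learned from a known-healthy period and flag sustained excursions. The signal, window and threshold below are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def flag_anomalies(signal: pd.Series, train_days: int = 30,
                   sigma: float = 4.0) -> pd.Series:
    """Flag readings that exceed a baseline learned from an early, healthy period."""
    baseline = signal.iloc[:train_days]
    threshold = baseline.mean() + sigma * baseline.std()
    # A sustained rise in pump on-time suggests a leak is bleeding off
    # fluid and forcing the surface pumps to run more often.
    return signal > threshold

# Illustrative daily surface-pump on-time (minutes): stable at first,
# then a slow upward drift as a hypothetical downstream leak develops.
rng = np.random.default_rng(0)
dates = pd.date_range("2018-01-01", periods=120, freq="D")
minutes = pd.Series(30 + rng.normal(0, 1, 120), index=dates)
minutes.iloc[80:] += np.linspace(0, 15, 40)   # leak develops around day 80

alerts = flag_anomalies(minutes)
print("First alert raised on:", alerts[alerts].index.min())
```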

What about insights into the regulator’s operating life? It is highly unlikely that any single individual could assess the life of a regulator by simply looking at the vast volume of information available in traditional HMIs. While it is common practice in the field to simply replace a component that is known to have failed, rarely is any consideration given to progressive damage of other components. In this case, without the data insight, the regulator would not have been replaced and would have likely failed in the future.

Without knowing that the regulator had experienced progressive damage, the root cause analysis (RCA) would likely categorize the situation as premature failure. Robust data analysis is the only reliable means of identifying the kind of progressive damage suggested in Figure 2.

The third data dimension, velocity, has dramatically increased as BOPs have become connected to the cloud. Users now have essentially immediate access to data, regardless of their location. This speed of access and the ability to run data analytics in real time allows for anomalies to be targeted more quickly and accurately, which empowers OEMs, contractors and operators to make data-informed decisions.

The speed of decision making also delivers value during RCAs. Historically, if a failure happened without rapid access to large volumes of downhole/subsea data, an investigative team with members from the rig crew, service provider, engineering and operations would piece together various types of information to assess what led to the failure. Such an investigation might have taken days or weeks to reach a conclusion, develop a corrective course of action or give the green light to continue with operations.

But now, with the same data instantly available to the rig crew and onshore personnel, interdisciplinary teams can quickly analyze the problem, develop a solution and start execution within hours. BHGE recently conducted such a failure analysis for an offshore operator. The full data analysis was performed and presented to company executives in a day. The executives discussed the findings and made a decision on the course of action at the same meeting.

Without this data, the traditional process would require pulling the equipment, evaluating the failure and running laboratory simulations to try to reproduce the problem event. Such a process might have taken two to three weeks of meetings, analysis and back-and-forth with the service provider to ultimately come to the same decision, but with significantly more downtime.

Data-driven Service Intervals

Traditionally, maintenance of BOP components has been driven by time-based service intervals, in which a component is serviced or replaced on a fixed schedule, regardless of its current working condition or how often it has been used. While there are valid reasons for some components to be serviced on a pre-determined time schedule, it may be equally acceptable to extend the service interval of many other components based on monitoring key performance indicators.

Consider the regulators in the previous field example. If the two regulators in Figure 2 are pulled for maintenance when they reach a predetermined interval of operating life, one can conclude that Regulator 2 has much more life left and that the service may not be needed. If the service recommendation is instead based on the Life Consumption Metric, with the target indicated by the dashed black line in Figure 2, then only Regulator 1 would need servicing, as sketched below. This type of data-driven decision making is what allows the service interval to be tailored to the need and drives efficiencies in operations. Operators can reduce end-of-well maintenance time, make optimal use of their service crew’s time and lower consumption of spare parts – all of which serves to reduce the total cost of operations.
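A minimal sketch of that condition-based decision, using a hypothetical Life Consumption Metric threshold analogous to the dashed line in Figure 2; all numbers are illustrative only.

```python
# Condition-based servicing: instead of a fixed calendar interval, flag only
# the regulators whose (hypothetical) Life Consumption Metric has crossed a
# service threshold.
SERVICE_THRESHOLD = 0.8          # illustrative fraction of consumed life

life_consumption = {             # illustrative end-of-well readings
    "Regulator 1": 0.92,
    "Regulator 2": 0.09,
}

to_service = [name for name, used in life_consumption.items()
              if used >= SERVICE_THRESHOLD]
deferred = [name for name in life_consumption if name not in to_service]

print("Service now:", to_service)    # -> ['Regulator 1']
print("Defer:", deferred)            # -> ['Regulator 2']
```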

Data-driven Design

As both the volume and variety of field data increases, OEMs and service companies can better understand the type of environment in which their components operate. Traditionally, component design was driven by factors such as regulation and reliability requirements. The latter requirement is where data can play a key role.

While the best effort is made to create testing protocols that replicate the real world, it is difficult to account for everything that a component will be exposed to – particularly since there are different environments and operational procedures used by different operators. Additional data allows equipment suppliers to fill in the gaps, thus helping to create more comprehensive testing protocols that improve reliability of both future and current products.

Once again, consider the regulator as an example. One KPI is the number of function cycles – the number of times the component is functioned from an open to a closed and back to an open position. Because a regulator relies on tight sealing surfaces, this mechanical movement contributes to life degradation. Field data shows, however, that function cycles are only one of several factors that impact regulator life. Hence, in order to build better regulator lifing models, or end-of-useful-life estimations, one must consider more than just function cycles.
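As a rough sketch of what a multi-factor lifing model might look like – the features, synthetic data and model choice here are assumptions, not an actual lifing methodology – a regression model can be trained on function cycles together with other operating factors to estimate remaining useful life.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Sketch of a multi-factor lifing model on synthetic data: function cycles
# alone do not explain regulator life, so other hypothetical operating
# factors are included as features.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.integers(0, 2000, n),      # function cycles
    rng.integers(0, 50, n),        # pressure excursions beyond a set limit
    rng.uniform(4, 40, n),         # average operating temperature, deg C
])
# Synthetic "remaining life" fraction that depends on all three factors.
y = np.clip(
    1.0 - (0.0003 * X[:, 0] + 0.006 * X[:, 1] + 0.002 * X[:, 2])
    + rng.normal(0, 0.02, n),
    0.0, 1.0,
)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("Feature importances (cycles, excursions, temperature):",
      model.feature_importances_.round(2))
```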

Just as data can help develop comprehensive testing protocols, it can also help determine whether components are being overdesigned or tested to limits they will never encounter in the field. This presents an opportunity to reduce testing time and/or implement cost-reduction initiatives, which can shorten the time to market or lead to lower-priced products.

Summary

This article has reviewed ways that data can help drive efficiency improvements in anomaly detection, maintenance and design. As the volume, variety and velocity of data improve, additional cases will be explored. While this discussion was focused on BOP regulators, similar improvements have been realized for many other BOP components.

Data scientists will play a key role in the extraction of value from data, but they are only one piece of the puzzle. A purely digital organization lacks the oilfield product and operations knowledge, while traditional oil and gas companies lack the latest skill sets and tools for data analysis. A team approach that combines the best of both sectors – the digital and the oilfield – will be required to realize the full value.

In the field, data will allow contractors and operators alike to operate earlier on the P-F curve and drive smarter maintenance activities, both of which will reduce operational risks and drive down total cost of ownership. Further, the data can be leveraged to close the loop and help drive future products and improvements.

While the industry is known to be conservative and slow to deviate from how it has operated in the past, those companies that embrace data mining and analytics will ultimately see improvements in their bottom line. Such improvements are not limited to regulators. The same methodology can pay substantial dividends in the development and operation of a range of field components, provided the correct volume, variety and velocity of data are available. DC
