Safety and ESG, November/December 2008

To avoid well control events, 21st-century drilling teams must recognize, repair system instability

By John J van-Vegchel, University of New South Wales

Well control deals with the detection of hydrostatic instability within the wellbore. The success of well control rests on three actions:

1. The ability of the drilling crew to detect hydrostatic instability in a drilling system on the surface or in the wellbore.
2. The closure of the wellbore using a mechanical device to temporarily restore a pressure balance between formation and hydrostatic pressures.
3. The re-establishment of primary well control by increasing the hydrostatic pressure in the wellbore (a simple illustrative calculation follows this list).
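
To make the third action concrete, the sketch below shows the standard kill mud weight calculation: the shut-in drill pipe pressure is converted into an equivalent mud weight increase over the well’s true vertical depth and added to the original mud weight to restore overbalance. This is a minimal illustration only; the depth, pressure and mud weight values are hypothetical.

```python
# Illustrative sketch: re-establishing primary well control by raising mud weight.
# Values below are hypothetical; 0.052 converts ppg to psi/ft.

def kill_mud_weight(original_mw_ppg: float, sidpp_psi: float, tvd_ft: float) -> float:
    """Kill mud weight (ppg) needed to balance formation pressure."""
    return original_mw_ppg + sidpp_psi / (0.052 * tvd_ft)

def hydrostatic_pressure(mw_ppg: float, tvd_ft: float) -> float:
    """Hydrostatic pressure (psi) of a mud column of the given weight and depth."""
    return 0.052 * mw_ppg * tvd_ft

if __name__ == "__main__":
    tvd = 10_000.0      # true vertical depth, ft (hypothetical)
    original_mw = 10.0  # current mud weight, ppg (hypothetical)
    sidpp = 520.0       # shut-in drill pipe pressure, psi (hypothetical)

    kmw = kill_mud_weight(original_mw, sidpp, tvd)
    print(f"Kill mud weight: {kmw:.2f} ppg")
    print(f"New hydrostatic pressure: {hydrostatic_pressure(kmw, tvd):.0f} psi")
```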

When dealing with well control, one is forever reminded of the consequences of failing to adhere to proven procedures and to follow the pathways of control set by previous generations. Yet blowouts and well control incidents continue to occur with alarming frequency. Is this merely a consequence of increased drilling activity, or are there other issues that need to be discussed?

Industry transformation

The complexity of drilling operations is growing exponentially, and operators are venturing into areas of exploration that a decade ago were considered beyond the scope of modern drilling. Four decades ago, drilling professionals moved onto a location, drilled a well, discovered hydrocarbon reserves, produced the well, then left the site to continue the cycle of discovery elsewhere. Drilling crews were exposed to the multiple facets of drilling and production and gained a broader understanding of the processes involved in the discovery and production of hydrocarbons.

Modern drilling and production activities have been separated into different camps of expertise, with each discipline sub-divided into sub-groups of speciality. Independent service companies, representing each of the sub-groups, focused on their products and trained their personnel according to their needs. Expertise in these areas was isolated from the daily operational activities of the drilling crew, fragmenting a once holistically trained drilling crew into multiple areas of speciality.

The drilling industry has been transformed. Today we witness an industry exploring for oil and gas at an unprecedented scale using human resources derived from multiple engineering disciplines to achieve the common goal of delivering energy resources to consumers.

The complex environments that we work within, coupled with the low tolerances for error, mean that each level of operational responsibility must be aware of decisions made by others. Drilling teams must learn how to detect operational instability whenever it occurs, across multiple levels of operation, and restrain its development to safeguard the overall operation. This article will discuss possible causes of instability in a drilling system and look at the human aspect of system failure.

It’s art, craft, science

For the purpose of this discussion, the term “system” will be used to describe a practical engineering environment operating within normal working parameters using recognisable variables. “Instability” will be used to describe a system that begins to deviate from its normal range of operation. This change begins through what we call a “transition horizon.” Subtle signs may go unnoticed while chaotic behaviour increases, culminating in a total loss of control and, ultimately, a disaster.

There are three dimensions to oilfield operations. The design of a well to be drilled, taking into account all of the complexities that may present themselves, requires engineering expertise and is therefore a “science.”

To physically drill the well using a variety of complex machinery while maintaining both control and operational tolerance is a skill and therefore a “craft.”

To communicate among all levels of operational skills so that the flow of information is unrestricted and without distortion or embellishment is an “art.”

Very rarely are individuals accomplished in all three dimensions. Over time, however, one’s skills will intersect with other dimensions, thus broadening the knowledge base. Expertise therefore consists of procedural knowledge and “case knowledge” woven within the fabric of communication that can be harnessed within a collective team setting for the purpose of identifying instances of instability.

Teams composed of individuals representing each operational dimension are better suited to detect instability in a system. Teams can work from a larger knowledge base, provided that they draw more from practical experience than from a solely theoretical background. A team able to resource their “hands-on” members as much as their scientific members will produce decisions more clearly defined than those produced by one “who knows ‘everything’ about a clearly delimited sector.” This process is known as a collective work product, where tangible results derived from several members of a group who apply their different skills far exceed those achieved by any individual member.

Human rationality is subject to limits, and these limits are subject to the environment in which a given decision-maker is placed. Teams are able to question and explore limits and boundaries more closely than an individual as long as levels of expertise exist within the group. Otherwise, the team will not be able to detect instability, and this will lead to out-of-control systems.

Today’s Drilling Environment

The beginning of the 21st century has witnessed an unprecedented demand for energy that has mobilised the drilling industry into never-before-seen levels of activity. Demand for oil in the first quarter of 2007 was 85.36 million bbl/day, while the industry could supply only 84.15 million bbl/day.

Large numbers of new drilling rigs are being commissioned to explore for more resources in order to meet demand. However, each new rig requires a core of experienced personnel whose training takes longer than the time it takes to build a drilling rig. The present pool of experienced personnel is being diluted to fill vacancies on new rigs, while promotions of inexperienced staff are being accelerated.

The loss of key personnel and the accelerated promotion of individuals lacking the “craft” erode the knowledge base from which a team draws heavily in order to enhance the ability to detect unstable systems.

Team members with limited knowledge will have difficulty recognising warning signs of system instability. They may also be unable to function normally if a rapid system failure occurs and a stressful overload of events follows.

If roles related to system instability have never been practised under stress, individuals may be overwhelmed by intense and confusing stimuli during a crisis. Such a situation occurred during the 1988 Piper Alpha disaster in the North Sea. Red Adair commented that confused workers in the accommodation units did not try to evacuate the facility because the evacuation alarm had not sounded, even though fire and explosions were occurring all around them.

Confusion can be alleviated only when training instills knowledge and experience to replace doubt and ignorance. A loss of key personnel within a team environment can cause a loss of trust between remaining team members and any new inductees. Morale within the team is lowered.

A loss of communication between individuals within the team environment also poses a problem in today’s drilling environment. The collective dynamics of any team, and its ability to detect instability within a system, are a function of the precipitating event or events behind the instability. The demoralisation of inexperienced individuals, whether on racial or individual grounds, will sever communications between the parties and lower a team’s ability to focus on a problem.

Today’s work environment is dynamic. Tensions exist between individuals, but they must not be carried into the workplace, or they may erode workers’ ability to detect errors and lead to a breakdown in communications.

A lack of team trust in an information source can be further compounded by the erosion of the team’s collective knowledge base. This may lead to signs of instability remaining undetected until the critical horizon is reached and a crisis situation results. Drottz and Sjoberg said, “If we receive information or detect danger with our own senses or estimate risks on the basis of our own knowledge, we have the independent means to establish facts or the truth. If not, our judgement and reaction will have to rely on mediated information.” Seminal research carried out by Herzberg, Mausner and Snyderman found that “the most frequent failing of a supervisor cited as a reason for low motivation to work was his (sic) lack of competence in carrying out his (sic) function.”

Individuals occupying positions beyond their proven ability will experience an increase in levels of stress that will inhibit their ability to learn. Janis and Mann found that when stress or anxiety levels were too high, the person became panicky and therefore engaged in desperate moves just to get out of the situation as quickly as possible, effectively destabilising a team. The individual may then avoid making decisions, or make decisions purely as a gut reaction.

Knowledge erosion and a lack of experience may also lead to a condition of “it-can’t-happen-to-me” syndrome, known as analysis paralysis. Paralysis and immobility result from contradictory impulses towards action and interfere with accurate perceptions of the true causes of instability. Facts may be twisted to suit the individual’s own rationale while the true causes of instability are ignored or unidentified. Teams must avoid paths of least resistance that may be presented by under-prepared team members. It is not so much the actual crisis that causes problems but instead the team’s reaction to the crisis.

Training is one means we have at our disposal to increase core knowledge and focus attention on conditions that lead to instability in a system. Yet training with the traditional objective of teaching learning outcomes is both too simple to follow and unlikely to be valid. A transition from a learning-outcome-based training system to a problem-solving or case-based approach is suggested. The idea that knowledge taught in courses can be effectively used later in the world outside the course, by recalling taught principles and then applying them, is, unfortunately, vague, logically cumbersome and empirically incorrect.

The transference of knowledge to the workplace should focus on a person’s ability to understand the content and underpinning knowledge related to a problem and the practical application of a solution using real-life applications and case studies.

Identifying Instability

Undetected instability of any system will lead to a crisis. The complexity of a crisis is generated by a gradual rise in systematic errors, which we call a “transition period,” that, if undetected, breaches the “point of critical mass.” A system beyond control and in a state of crisis will produce a disaster. Accelerated promotion, retirement or the departure of individual team members to more lucrative operations lowers a team’s ability to detect faults in systems.

The first signs of instability are often hidden or concealed within the background noise of the operation. Inexperience within a group may cause the members with the least knowledge to cluster within the team, creating their own background noise that may inhibit the flow of information to other team participants. Faulty decisions may derive from this sub-group, which misses, overestimates or underestimates warning signs.

The ability to deal with a crisis situation depends on the team structure developed before chaos sets in. A team with a craft-engineering balance may avoid the pitfalls of faulty decisions by developing strategies and procedures to identify and investigate instances of change within a system. The collective memory of the team will therefore grow and help establish the initial landmarks needed to identify the early warning signs of instability before the point of critical mass is breached.

To identify instability within a field operation, one should revise and test the ability of teams to measure the quantity, and ensure the quality, of information gathered during diverse operations. Teams should be made aware of, and understand, the disasters that may result from a lack of understanding, as well as the conditions that generate problems in the first place.

The assessment of team performance and the identification of a team’s characteristic behaviour during organised activities can lead to the recognition of weaknesses in a sociological sense that may inhibit a team’s ability to function during control operations. It is therefore important that a team exhibits a very open, learning mindset, reflecting a high degree of trust among members. The team is likely to make fewer mistakes when the collective level of knowledge and experience increases relative to the task at hand.

Risk assessments

Risk assessments undertaken before any operation can highlight the potential for system failure. These valuable assessment tools can be weakened if those undertaking the assessment lack the technical expertise to recognise potential impending dangers. “Tick and flick” check sheets or diagnostic computer programmes designed to identify hazards do not highlight underpinning dangers that may only be identified by experienced operators.

One trainer/researcher said, “We often see that automated thinking tools tend to block people’s capacity to see or know the broader context of the problem they face.” However, it is often the hazards that one did not plan for that lead to systems losing stability. Fertile ground for instability in any system is often created by repeated failures that are not acted upon, warnings that are given but not taken into account, and institutional vacuums that arise from a loss of collective memory.

There is at least one person in every organisation who knows of an impending crisis. The problem is that those who often know most about it are the ones who have the least power to bring it to the attention of the organisation.

Warning signs

The ability to detect instability lies in the interpretation of warning signs: signals that may indicate when a normally operating system is starting to weaken and enter a period of instability. Long before they actually occur, all crises send out a trail of early warning signals. All teams must constantly scan their operations’ external and internal environments for early warning signs. The factors and effects involved in long-range sequences of events differ from those in short-range sequences. When a crisis event is reached, the causes often have foundations in actions preceding the event. The basic issue of control is being able to detect low-intensity signals of instability when high background noise exists.
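
One generic way to frame “low-intensity signals against high background noise” is a cumulative-sum (CUSUM) check on a monitored parameter such as pit volume: individual readings look like noise, but a persistent small drift accumulates until it crosses a threshold. The sketch below is an illustration of that idea, not an industry-standard procedure; the parameter, baseline and thresholds are assumptions invented for the example.

```python
import random

def cusum_alarm(readings, baseline, slack, threshold):
    """Return the index at which a persistent upward drift is flagged, or None.

    baseline  - expected value of the parameter under stable conditions
    slack     - drift smaller than this (per reading) is treated as noise
    threshold - accumulated excess that triggers the alarm
    """
    s = 0.0
    for i, x in enumerate(readings):
        s = max(0.0, s + (x - baseline - slack))
        if s > threshold:
            return i
    return None

if __name__ == "__main__":
    random.seed(1)
    # Hypothetical pit-volume readings (bbl): noisy baseline, then a slow 0.3 bbl/reading gain.
    stable = [100 + random.gauss(0, 0.5) for _ in range(60)]
    drifting = [100 + 0.3 * k + random.gauss(0, 0.5) for k in range(40)]
    alarm = cusum_alarm(stable + drifting, baseline=100.0, slack=0.2, threshold=3.0)
    print("Alarm at reading:", alarm)  # flags the drift well before it is obvious by eye
```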

Overconfidence

Overconfidence in an organisation’s capacity to deal with operational fluctuations may erode its ability to detect instability. Operations may continue using detection skills that worked in the past and may feel that change is unnecessary owing to their past success rate. Mitroff recognised that organisations “get into trouble not because they are a failure from Day One, but because they are an overwhelming success for so long. As a result, they repeat unthinkingly the same actions over and over again, even though the environment conditions that made their initial actions appropriate no longer apply.”

This effect is witnessed when teams working in a closed geographical locality suddenly find themselves in an environment where the old rules and experiences no longer apply. Adaptation to new processes must be appropriate and based on conceptual understanding, not repetitive behaviour; otherwise, instability will go undetected. Every effect has multiple causes, and each set of causes produces multiple effects.

Therefore, the teaching of critical thinking skills is essential and allows one to anticipate problems before they manifest and become impossible to rectify.

Transfer of information

The ability to detect instability in a system also requires vigilant adherence to the transfer of information in an accurate and undistorted manner. Instability can arise when observations are communicated inaccurately because the observer has misinterpreted signals through a lack of underpinning knowledge about the cause. Therefore, each team should include an individual who comprehends the complexities of the science so that observations and signals can be correctly analysed without information being lost, which can happen when a single observation is left to one person to make sense of. The sharing of information between the science and the craft can sharpen the signals into a meaningful whole.

With the passage of time, the number of inaccuracies is apt to increase, especially when information is passed from person to person. The global nature of the drilling industry dictated the eventual rise of multicultural teams, which now carry out a majority of international operations. The assimilation of information within a multicultural team may be affected by individual and cultural biases.

The restrictions of language and the need to make the information intelligible to others may lead us to simplify the experience and thus distort the importance of the information so that action may not follow.

Focusing attention on team inclusion and not exclusion in the decision process will provide multicultural team members with the opportunity to test their understanding of the situation while gaining cultural acceptance.

Leadership, defensive barriers

Ciulla refers to leadership as a complex moral relationship between people, based on trust, obligation, commitment, emotion and a shared vision of the good. In the drilling industry, the lack of leadership within a team can lead to team members moving away from critical observance of the drilling operation and a failure to recognise warning signs of instability.

Leadership is not a process where team members are rewarded for carrying out and accomplishing agreed-upon objectives. Leadership is a communications challenge where the leader should try to motivate members of the team to comprehend signs of instability while empowering them to make decisions if instability is detected. Creative solutions cannot be the domain of any one person. Information from a variety of sources is required in order to make a sound judgement. Any individual effort to initiate a creative solution using a narrow range of information is dangerous.

The best decisions are based on the best information available, and this information must come from all team members. Participant leaders who consult with others and promote collaboration within the decision-making process will experience a healthier outcome.

A collaborative and proactive team that recognises each member not as a subordinate operating under a fixed leader, but as a leader operating among a team of leaders, will positively affect the team’s ability to identify instability. Each member becomes a defensive barrier that blocks detection failures and assists in maintaining system integrity.

The “Swiss cheese” model of incident occurrence best illustrates how multiple barriers can reduce the possibility of warning signs of instability going undetected.

This model was developed by English psychologist James Reason, who views defensive barriers like layers of “Swiss cheese” that contain many holes. Unlike the cheese, however, these holes are continually opening and closing and shifting their location on each layer or barrier. A critical incident may penetrate the first barrier but will generally be blocked by the next barrier because the holes do not align between the first and second barrier.

In a collaborative team environment, each team member is empowered to question and remain proactive in the process of detection. The likelihood of detection failure is reduced because each member represents an individual barrier against undetected instability in a system. According to Dr Reason, “the presence of holes in any one ‘slice’ does not normally cause a bad outcome. Usually, this can happen only when the holes in many layers momentarily line up to permit a trajectory of accident opportunity – bringing hazards into damaging contact with victims.” If team members are not permitted to interact with the processes involved in decision-making, then each human defensive barrier will closely reflect the views and opinions of the one leader. They will shadow the holes in the first barrier. This latent condition can create long-lasting holes or weaknesses in the defensive barriers of a system and induce failures.
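
The value of independent barriers, and the cost of barriers that merely shadow the leader, can be illustrated with a toy calculation: if each barrier independently misses a given warning sign some fraction of the time, the chance that every barrier misses it shrinks rapidly as barriers are added, whereas aligned barriers add almost nothing. The probability used below is invented purely for illustration.

```python
def miss_probability(per_barrier_miss: float, barriers: int, independent: bool) -> float:
    """Probability that a warning sign slips through every barrier.

    independent=True  - each barrier fails on its own (holes rarely line up)
    independent=False - barriers shadow the first one, so extra layers add nothing
    """
    if independent:
        return per_barrier_miss ** barriers
    return per_barrier_miss  # aligned holes behave like a single barrier

if __name__ == "__main__":
    p_miss = 0.2  # hypothetical chance that any one team member misses the sign
    for n in (1, 3, 5):
        print(f"{n} independent barrier(s): {miss_probability(p_miss, n, True):.4f}")
    print(f"5 shadowing barriers:      {miss_probability(p_miss, 5, False):.4f}")
```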

If an integrated balance between leadership and team participation is achieved, then the performance of a team and the team’s ability to use its collective knowledge to detect unstable drilling conditions will improve. A team should be able to vary its composition, behaviour pattern and leadership approach to optimise and better integrate individual, team and non-team performance. The secret to a better balance lies in learning to integrate the collective disciplines required for team performance with the discipline of single-leader behaviour, not in replacing one with the other.

Leaders, through their conduct and teaching, must try to make their fellow team members aware that they are all stakeholders in a conjoint activity that cannot succeed without their involvement and commitment.

There are three main obstacles that may affect an individual’s team participation and thus lower their ability to act as a defensive barrier against instability. The first is the failure to adequately define a member’s role within a team. Team members’ operational responsibilities should be delineated so that the individual is empowered to act while sharing an equal level of authority within the group.

Secondly, different groups in which the individual has a role may require their participation at the same time, creating a “role conflict.” Thirdly, the failure to motivate individuals to perform a role when they are not directly affected may lead to complacency.

Conclusion

The critical ability of an individual to detect the instability of a system is no assurance of valid judgement when adequate facts are not available or when deductions are based on premises subsequently not borne out by evidence. Conditions that lead to the instability of a system centre on a team’s ability to detect subtle changes in operational parameters and act on these warning signs. A team’s collective dynamic response to instability is enhanced when each member is empowered, with a clearly delineated role, to act as a leader if their area of expertise is called into question.

The loss of collective knowledge within a team can reduce this expertise to dangerous levels and cause a team to become dysfunctional. Levels of inexperience can be tolerated within a team as long as the collective knowledge of the team is shared and the inexperienced team member is not abandoned to act as the only defensive barrier between instability and disaster.

John J van-Vegchel is senior lecturer and deputy director of the IADC WellCAP-accredited National Drilling and Well Control Programme at the University of New South Wales. He began his career in the 1970s with the Geological Survey of Queensland and has worked as a driller and rig manager both onshore and offshore in Australia, Southeast Asia and China. He joined the School of Petroleum Engineering at the University of New South Wales in 1998 to set up and deliver training for the school’s National Drilling and Well Control Programme.
