Nabors advances automation of iron roughneck positioning using computer vision system
System was customized to operate in dynamic outdoor conditions; trials on 10 rigs showed 95% accuracy and improved consistency

By Stephen Whitfield, Senior Editor
By applying computer vision technology, drillers and operators have the potential to automate various complex tasks traditionally performed by humans. This can reduce manual labor, mitigate risks and accelerate operations. However, existing efforts to apply the technology on the rig still face numerous challenges, including extreme weather, lighting that shifts between daylight and spotlights at night, and the presence of obstructive materials like mud.
At the 2025 SPE/IADC International Drilling Conference in Stavanger, Norway, in March, Nitanshi Mahajan, Data Scientist at Nabors Industries, presented an AI system that was designed to operate in dynamic conditions, with a specific focus on automating iron roughneck positioning during tripping.
Computer vision is a field of AI that enables computers to understand, process and interpret visual data. A model classifies image inputs into predefined classes and, within those images, can detect individual objects. For instance, it can detect cars and people within an image taken from a traffic camera.
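For illustration only, a generic off-the-shelf detector can be run along these lines in Python; the pretrained model, file name and confidence threshold are placeholders rather than details of Nabors' system:

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # Load a general-purpose pretrained detector (its COCO classes include "car" and "person").
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = to_tensor(Image.open("traffic_camera_frame.jpg").convert("RGB"))
    with torch.no_grad():
        detections = model([image])[0]   # dict with "boxes", "labels" and "scores"

    # Keep only confident detections, e.g. the cars and people in the traffic-camera example.
    keep = detections["scores"] > 0.8
    print(detections["labels"][keep], detections["boxes"][keep])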
For this application, Nabors sought to use a computer vision model to detect stump height. Traditionally, a driller controls the iron roughneck through a joystick and a human-machine interface, manually aligning the drill pipe with the well center when making a connection. The computer vision system effectively automates this process: Working in conjunction with reference height data provided by Nabors, the system calculates tool joint height and triggers the iron roughneck to move to a calculated position. The driller can then break out or make up the connection from that position.
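The positioning logic itself is proprietary to Nabors, but the general idea described above can be sketched as follows; the calibration constants, detection format and controller interface are hypothetical placeholders, not the company's actual implementation:

    # Hypothetical sketch: turn a detected tool-joint bounding box into a target
    # position for the iron roughneck. All names and constants are illustrative.
    PIXELS_PER_METER = 412.0        # assumed value from a per-rig camera calibration
    REFERENCE_MARK_Y_PX = 1480.0    # assumed pixel row of a fixed mark at a known height
    REFERENCE_MARK_HEIGHT_M = 1.20  # assumed height of that mark above the rig floor

    def tool_joint_height(box):
        """Estimate tool-joint height above the rig floor from a detection box."""
        joint_top_px = box[1]   # box = (x_min, y_min, x_max, y_max); image y increases downward
        return REFERENCE_MARK_HEIGHT_M + (REFERENCE_MARK_Y_PX - joint_top_px) / PIXELS_PER_METER

    def position_iron_roughneck(box, controller):
        """Send the calculated target height to a (hypothetical) controller interface."""
        target = tool_joint_height(box)
        controller.move_to(height_m=target)   # driller then makes up or breaks out from here
        return target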
“The idea is that the driller won’t have to manually do this positioning. They can just sit back and monitor the system. We feel this can help in actually improving the preciseness of the connections, as well as improve the positioning time,” Ms Mahajan said.
System design and training
The system included a high-resolution digital camera connected to an industrial PC. The camera was built to operate reliably in extreme temperatures, both hot and cold, and under different lighting conditions.
It was mounted in a fixed position above the iron roughneck to provide an unobstructed view of the tool joint. Ms Mahajan said the mounting system was designed to remain stable and vibration-free, even during rig operations or in the presence of strong winds. This allows the camera to consistently capture high-quality images and videos, free from motion blur or other distortions.
The industrial PC was equipped with a graphics processing unit. It was hosted within the driller’s cabin, with a Gigabit Ethernet cable connecting it to the camera and a separate cable connecting it to the rig’s network.
A primary challenge for deploying any computer vision model on a rig is dealing with fast-changing and dynamic lighting conditions, which can be significantly different from controlled indoor environments. Proper lighting is essential for accurate image capture, but lighting conditions can continuously change throughout the day in outdoor environments.
To address that challenge, Nabors optimized the camera’s operating parameters to maintain a controlled range of light passing through the lens. This meant finding a balance between the shutter speed (the length of time a camera’s sensor is exposed to light) and the gain (the amplification of the electrical signal from the camera’s sensor).
A slow shutter speed allows the camera to capture a sufficient amount of light, even in low-light conditions, but it increases the overall inference time (the time that the computer vision model needs to analyze the image). High gain brightens the image without the need to decrease shutter speed, but it introduces more noise into the image, which can lead to inaccurate analyses by the computer vision model.
Optimizing these parameters ensures that the camera can dynamically adjust to the desired brightness levels. By adjusting shutter speed and gain within a certain range, the camera can rapidly adapt to changing lighting conditions without introducing excessive noise, while also maintaining satisfactory inference times.
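A minimal control loop capturing that balance might look like the following; the limits, target brightness and step size are assumed values, and the camera interface is a placeholder rather than a real SDK:

    import numpy as np

    SHUTTER_RANGE_US = (200, 8000)   # assumed bounds that keep inference time acceptable
    GAIN_RANGE_DB = (0.0, 12.0)      # assumed bounds that keep sensor noise acceptable
    TARGET_BRIGHTNESS = 118          # assumed target mean pixel value (0-255)

    def adjust_exposure(frame, shutter_us, gain_db, step=0.1):
        """Nudge shutter speed first, then gain, toward a target image brightness."""
        error = np.sign(TARGET_BRIGHTNESS - float(np.mean(frame)))
        new_shutter = float(np.clip(shutter_us * (1 + step * error), *SHUTTER_RANGE_US))
        if new_shutter == shutter_us:            # shutter at its limit: fall back to gain
            gain_db = float(np.clip(gain_db + 1.0 * error, *GAIN_RANGE_DB))
        return new_shutter, gain_db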
Ms Mahajan noted the importance of training the AI model to learn a wide range of features and patterns, a process that involved building a dataset of more than 30,000 images reflecting the various conditions Nabors expected the model to encounter during operations. The images were sourced directly from cameras deployed on various Nabors rigs over a year-long period.
“Because this model uses images as an input, good diversity and readability of the data is very important for the model to perform in real-world conditions. We had to make sure our data accounts for all weather conditions – rain, snow, fog, heavy sunlight. We collected data from multiple rigs and our testing facility, and every rig is different, so the positioning of the camera on a rig can be different. We had to train the data to account for different angles, distances and vertical impacts. That helps the model prepare for most of the conditions that could take place.”
To train the model, Nabors fed it training images along with a reference stump height, and the model made predictions about object locations.
These predictions were then evaluated against the actual object locations to determine the prediction error rate. Based on this error rate, the model’s parameters were updated through optimization techniques like backpropagation and gradient descent.
The model was then tested on a separate set of validation images to make new predictions, which were again evaluated against actual positions to provide a new measure of prediction error. This cycle of prediction, evaluation and parameter updates was repeated until the model achieved the desired level of accuracy.
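A generic train-and-validate cycle of this kind, written as a PyTorch-style sketch rather than Nabors’ actual code, looks roughly like this (the model is assumed to return a scalar loss when given images and target positions):

    import torch

    def train(model, train_loader, val_loader, epochs=50, lr=1e-4, target_val_loss=0.05):
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)   # gradient descent
        for epoch in range(epochs):
            model.train()
            for images, targets in train_loader:
                loss = model(images, targets)    # prediction error vs actual locations
                optimizer.zero_grad()
                loss.backward()                  # backpropagation
                optimizer.step()                 # parameter update

            # Evaluate on held-out validation images; stop once error is low enough.
            model.eval()
            with torch.no_grad():
                val_loss = sum(model(imgs, tgts).item() for imgs, tgts in val_loader)
                val_loss /= len(val_loader)
            if val_loss < target_val_loss:
                break
        return model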
Ms Mahajan noted that, despite the diverse dataset used in the training, the computer vision model could not encompass every possible scenario that might be encountered on a rig – there is always the potential for something unforeseen to take place.
To address that limitation, Nabors applied data augmentation techniques to enrich the dataset and enhance the model’s ability to generalize to various real-world situations. These techniques included rotation, scaling, flipping and color adjustments.
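As an image-only illustration (in a detection task the bounding boxes would need the same geometric transforms), such a pipeline can be composed with standard library tools; the parameter values here are arbitrary:

    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomRotation(degrees=10),                     # rotation
        transforms.RandomResizedCrop(size=640, scale=(0.8, 1.0)),  # scaling
        transforms.RandomHorizontalFlip(p=0.5),                    # flipping
        transforms.ColorJitter(brightness=0.4, contrast=0.4,       # color adjustments
                               saturation=0.3),
        transforms.ToTensor(),
    ])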
The system also featured a user-friendly interface and physical calibration tools that were designed specifically for use at the well center. Detailed step-by-step guidelines for using the system for calibration were developed and provided to the drillers. Additionally, the calibration process included cross-validation with the iron roughneck’s built-in vertical height sensor, providing another layer of redundancy.
Trial results
The computer vision system was installed on 10 Nabors rigs for trials last year. Ms Mahajan said the company looked at two key performance indicators for the system – the speed and accuracy at which the model could position the iron roughneck.
To measure accuracy, Nabors recorded the tool joint height measured by the system and compared it with the final breakout or makeup position as supervised by the driller. Among the collected data – which comprised thousands of stands – the system achieved an average stand-level accuracy of 95%. This indicated that only 5% of the stands required manual intervention.
The average iron roughneck positioning time using the system was recorded at 5.53 seconds, which was approximately 25% faster than the 7.37-second average recorded for manual connections on the trial rigs.
Standard deviation of the positioning time – which measures how spread out the data points are, indicating how consistent positioning times were – also showed a significant improvement with the computer vision system: 0.62 seconds, versus 1.3 seconds for manual positioning.
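Worked out from the reported figures (a back-of-the-envelope check, with the annual connection count assumed rather than reported):

    manual_avg, cv_avg = 7.37, 5.53      # average positioning time, seconds
    manual_std, cv_std = 1.30, 0.62      # standard deviation, seconds

    speed_gain = (manual_avg - cv_avg) / manual_avg    # ~0.25, i.e. about 25% faster
    spread_cut = (manual_std - cv_std) / manual_std    # ~0.52, i.e. roughly half the spread

    # Over an assumed 10,000 connections per year, the average-time saving alone is
    # (7.37 - 5.53) s * 10,000 = 18,400 s, or about 5.1 hours of positioning time.
    saved_hours = (manual_avg - cv_avg) * 10_000 / 3600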
“That number – a 0.5-second difference in deviation – looks very small, but if you sum that up over thousands of connections over the course of a year, that can actually make for significant time savings. That’s a 50% improvement in efficiency. That’s an area where we see the computer vision model making a lot of progress,” Ms Mahajan said. DC



