
Digitalization efforts necessitate evolution in established process control network security model

Adopting an extension to the Purdue reference model could help organizations to ensure security as they adopt cloud services to drive intelligent automation

By Tommy Evensen, MHWirth

Requirements to be always on have increasingly posed a threat to process control systems, which have historically been physically separated from corporate networks. Process control networks have traditionally been segmented using the Purdue Reference Model (Figure 1), which essentially means dividing the network into areas by criticality. This article proposes an extension to the Purdue model, with the goal of increasing operational flexibility while maintaining security.

It is well known that, from a security standpoint, allowing data to flow into a process control network is not recommended, because inbound traffic can directly impact the process and its operations. Outgoing traffic can be allowed as long as it is unidirectional and strictly controlled.

However, new cloud-based services promise event-driven optimization of drilling processes, utilizing sensor data readily available on drilling assets. What happens when these services get “smart” enough to require direct control of devices to optimize their processes? Is the industry ready for such a change, and how do we ensure security at the same time?

Gartner defines digitalization as the use of digital technologies to change a business model and provide new revenue- and value-producing opportunities; it is the process of moving to a digital business. More simply, it means automating a manual process with a digital system to increase that process’s efficiency and safety. The automation industry has been doing this for at least a decade, with the goal of increasing efficiency and reducing the risk of human error.

In recent years, terms like industrial internet of things (IIoT) and Industry 4.0 have become buzzwords, promising more automation by using interconnected devices and technologies such as 5G, cloud, machine learning and artificial intelligence. To the operational technology (OT) community, this can be a bit scary, as systems have historically been “air-gapped” so they were unreachable from the outside world. The new “always-interconnected” future will challenge the way industrial control networks are secured. At the same time, more security measures will need to be added to protect the processes so that they can be managed safely and efficiently.

Historically, automation has used closed-loop systems running on legacy, insecure fieldbus protocols. This, combined with specialized and obscure process control hardware, made it hard for potential threat actors to infiltrate a system.

Today, there is growing use of commercial off-the-shelf hardware, such as mainstream Windows computers and Ethernet-based networks. Hence, potential threat actors now have a far simpler way to breach a process control system using standard tools readily available on the internet.

Industrial control systems (ICS), including those in the oil and gas industry, are indeed vulnerable to simple attack vectors. One example is the 2019 cyber attack on Norsk Hydro, where forensics showed that process networks were not properly segmented from the enterprise and internet-connected networks.

Many of these legacy process control systems were running on unmanaged networks and computers with zero visibility – no knowledge of what, or who, was on the system. Due to the “legacy” nature of these networks, a model was created to “zone” the network into different segments. Multiple models exist today, with the Purdue Reference Model being the most common and well known. The model was adopted from the Purdue Enterprise Reference Architecture by ISA-95 and is now used as a concept model for ICS network segmentation.

The Purdue model divides the ICS architecture into three zones and six levels. Starting from the top, these are:

Enterprise

Level 5: Enterprise network

Level 4: Site business and logistics

Industrial Demilitarized Zone (DMZ, often called Level 3.5)

Cell/Area/Industrial Zone(s)

Level 3: Site operations

Level 2: Supervisory control

Level 1: Basic control

Level 0: Process/field

If implemented properly, this model yields a well-segmented network architecture in which process data flows up and down through each level in turn, increasing security.

Why is this important? If a computer worm infects a system, it will automatically try to replicate itself to neighboring systems using known weaknesses. By locking down access and segmenting systems such as Active Directory, file servers, historians and HMI systems into different networks, and allowing only certain traffic between them, the worm can be prevented from spreading. This allows a system to be quickly restored without disrupting the ongoing process at lower levels, as traffic between zones/levels should only pass between adjacent levels, from higher to lower, without “jumping” over a level.
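To make that flow rule concrete, the minimal Python sketch below (written for this article; the level names follow the list above, but the function and its rule are illustrative only, not any specific firewall product) checks whether a proposed data flow respects the “adjacent levels only” principle:

# Illustrative sketch of Purdue-level flow rules; not a real firewall policy.
PURDUE_LEVELS = {
    5: "Enterprise network",
    4: "Site business and logistics",
    3.5: "Industrial DMZ",
    3: "Site operations",
    2: "Supervisory control",
    1: "Basic control",
    0: "Process/field",
}

ORDER = sorted(PURDUE_LEVELS, reverse=True)  # 5, 4, 3.5, 3, 2, 1, 0

def flow_allowed(src, dst):
    # Traffic may only pass between directly adjacent levels, so it
    # must traverse every level and can never "jump" over one.
    return abs(ORDER.index(src) - ORDER.index(dst)) == 1

print(flow_allowed(4, 2))    # False: a worm on Level 4 cannot reach Level 2
print(flow_allowed(3.5, 3))  # True: the DMZ talks to site operations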

Figure 1: The Purdue Reference Architecture Model is a commonly used concept model for network segmentation of industrial control systems.

Another benefit of following the Purdue model is that, from the top down, a whole level can be lost without losing operational integrity. If the DMZ is lost, the link to the outside/enterprise world is lost, but all other systems remain intact. Losing Level 3 means losing access to operational support systems, such as Active Directory, SQL database servers and file servers. This sounds bad, but cached log-ons and local files are retained, and HMI operations can still be used to control the process. Losing Level 2 translates into a “loss of view” situation, where operators can no longer see the status of a process on the HMI panels but can still operate and safely shut down the process using visual reference.


One example of risky placement is an engineering workstation located in the wrong part of an industrial network. This is a “last resort” tool that needs to be available if a recovery or urgent maintenance task is required. Placing it in a DMZ carries high risk: it is a typical “crown jewel” that can be used as a pivot point for further lateral movement into a control system, mainly because the access restrictions available on these systems are limited and because they are normally reachable from the outside world as a tool for system integrators and support/engineering roles. The author believes that no system hosted in an industrial DMZ should be able to directly impact a control system.

Digitalization and its promise of increased efficiency require that data generated at Levels 0/1 be processed and used to make intelligent decisions. Getting that data may require new, “smarter” sensors or a modern cloud-connected gateway module; these devices are often Ethernet based. Data transport and storage are also required – for example, a historian hosted as an Azure service or a vibration monitoring system delivered as software as a service in the cloud. Field studies have shown that these sensors are often configured to send data directly to cloud services without regard for the Purdue model or the increased risk involved in opening up this data flow.

Automatic intelligent decisions will also require that data can flow back into the system. This may take the form of equipment commands, e.g., to lower torque or pressure based on something identified in the data processed by the cloud service. Cases have been observed in the field where command data is written directly to a process controller from a cloud service. This can significantly increase the attack surface and risk: not only is the asset itself in jeopardy, but the cloud service is also exposed.

If one of these services is compromised, the threat actor can gain direct control over every process control network connected to that cloud service.

How can the industry meet the requirements of these “always connected” ICS while keeping them secure? One way is to use strict security control of the cloud services, transport networks and the control system itself. This can be a daunting task, as corporate cybersecurity organizations typically have limited resources.

Another option is Edge computing, which is the approach MHWirth is taking with its drilling products. Keeping the intelligence and decision making inside the asset’s control system guarantees that the required computing resources are available and that a lost link to cloud services can be sustained without losing the efficiency of these “smart” systems. Data can still be sent outward to a cloud service for analytics, but data should not flow back into the process network. These Edge devices should also be highly secured using, for example, unidirectional gateways for outgoing traffic.
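As a rough illustration of that design, the hypothetical Python sketch below keeps the control decision on the edge device and pushes telemetry outward only. The decide() logic, function names and values are invented for this article, and in practice the outbound path would sit behind a unidirectional gateway:

# Hypothetical edge-device loop: control decisions stay local to the
# asset; only telemetry leaves, and nothing is accepted back in.
import json, queue

outbound = queue.Queue()  # stand-in for a unidirectional gateway

def decide(torque, limit=50.0):
    # Local, deterministic decision: clamp torque to a safe limit.
    # A cloud outage cannot affect this path.
    return min(torque, limit)

def apply_setpoint(value):
    print(f"controller setpoint -> {value}")  # write to the local controller

def edge_loop(sensor_readings):
    for reading in sensor_readings:
        setpoint = decide(reading)  # decision made on the asset itself
        apply_setpoint(setpoint)
        # Analytics copy flows outward only; no inbound command channel.
        outbound.put(json.dumps({"reading": reading, "setpoint": setpoint}))

edge_loop([42.0, 57.3, 49.9])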

Hence, control is kept “local”, and a cloud outage does not reduce efficiency, because data- and intelligence-driven decisions are kept as close to the process as possible. So, where are these Edge computing devices placed in the Purdue model? One suggestion is to place them on the lower levels (0-3) but in their own zone, while properly segregating traffic going in and out of these Edge devices. Outbound traffic from this “side zone” can still be allowed to any enterprise or cloud zone, if required, as in the policy sketch below.
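One hypothetical way to express such a side zone, extending the earlier flow-rule sketch (the zone names and default-deny rule are illustrative assumptions, not a vendor configuration):

# Hypothetical outbound-only policy for the Edge "side zone".
EDGE_ZONE_POLICY = {
    ("edge", "cloud"): "allow-outbound",       # analytics export permitted
    ("edge", "enterprise"): "allow-outbound",  # reporting permitted
    ("cloud", "edge"): "deny",                 # no commands back in
    ("enterprise", "edge"): "deny",
}

def check(src, dst):
    # Anything not explicitly allowed is denied by default.
    return EDGE_ZONE_POLICY.get((src, dst), "deny")

print(check("edge", "cloud"))  # allow-outbound
print(check("cloud", "edge"))  # deny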

Figure 2 shows where Edge computing might fit within the standard Purdue model.

With this change, the foundation of the Purdue architecture model is kept intact, so its architecture does not become obscured. Existing brownfield sensors are also retained and their data utilized. Further, the possibility remains of “losing” a whole control level, such as the DMZ, without losing complete control of a given process, while still getting the benefits of services available in the cloud.

This option is not conclusive; it only shows, by example, what one company has done. Discussions will continue about different ways to open up new and existing control systems without compromising security, while still allowing organizations to harvest data and utilize cloud services and their promise of increased efficiency and reliability for process control systems. DC
