Operational Digital Twin: How data contextualization provides a complete, actionable understanding of industrial operations

  • 24 August 2022

INTRODUCTION 

A digital twin can be one of the most useful, insightful tools for driving industrial innovation. While the digital twin concept is no longer new, the scope of the term continues to expand with technological advances, particularly in the realm of the Industrial Internet of Things (IIoT). Over time, digital twins have evolved to meet the practical needs of their users. In oil and gas, for example, the possibilities of condition-based monitoring and predictive maintenance have amplified the need for a digital representation of both the past and present condition of an object or system.

 

Gartner predicts that “by 2023, 33% of owner-operators of homogeneous composite assets will create their own digital twins, up from less than 5% in 2018” while “at least 50% of OEMs’ mass-produced industrial and commercial assets will directly integrate supplier product sensor data into their own composite digital twins, up from less than 10% today.” In the same report, Gartner indicates that digitalization will motivate industrial companies to integrate and even embed their digital twins with one another to increase their own competitiveness. To do that, industrial companies need to make some sound strategic decisions now to lay a firm but flexible foundation for digital success.

An organization can enhance the overall understanding of its operations by putting all of its OT and IT data through a contextualization pipeline to create an operational digital twin. This next frontier in the digital twin space uses scalable cloud architecture to decouple individual models (e.g., applications, simulation models, and analytics) from separate source systems, reversing the unnecessary complexity of point-to-point integrations.

Oil and gas companies that deploy an operational digital twin will finally have true control over their data: the ability to understand where it comes from, how reliable it is, and how to enrich it over time. They will also be the first ones to scale successful solutions on top of that data, which must be the priority of any digitalization initiative.

This article outlines the prerequisites and benefits of constructing an operational digital twin using examples already at work in the field.

There should be no limit to the amount of data or the types of data included in this operational digital twin. Every data type, from time series to piping and instrumentation diagrams (P&IDs) to 3D models to maintenance logs to weather data, adds context that brings the operational digital twin closer to representing the true industrial reality of the asset(s). This vital foundation gives data owners a complete, useful overview and allows authorized users, whether internal or external, to streamline the creation of models for individual components, equipment, and processes because all the relevant data already exists in a single, accessible virtual space.

The operational digital twin allows for data consumption based on the use case. Any model the user creates can run on the live streaming data that exists there, enriching the space by feeding its own insights or derived information (e.g., synthetic temperature or flow values created by a simulator for equipment where no real sensor exists) back into the twin. Combined with live and historical data, these insights on equipment behavior shore up the operational digital twin, making it even more complete and useful for the future.
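As a rough sketch of this feedback loop, the Python snippet below derives a synthetic flow estimate from two real pressure sensors and writes it back into the twin's data store. The TwinClient class, the series names, and the flow formula are all illustrative assumptions, not a real platform API.

# Minimal sketch of feeding derived data back into an operational
# digital twin. TwinClient and its methods are hypothetical
# placeholders for the platform's actual read/write API.

class TwinClient:
    """Stand-in for a real digital-twin data client."""

    def __init__(self):
        self._series = {}  # series name -> list of (timestamp, value)

    def read_latest(self, series_name):
        return self._series.get(series_name, [(0, 0.0)])[-1]

    def write(self, series_name, timestamp, value):
        self._series.setdefault(series_name, []).append((timestamp, value))


def estimate_flow(upstream_pressure, downstream_pressure, k=0.85):
    # Toy first-principles model: flow proportional to the square root
    # of the pressure drop across the pump (no real sensor exists here).
    drop = max(upstream_pressure - downstream_pressure, 0.0)
    return k * drop ** 0.5


twin = TwinClient()
twin.write("pump_01.upstream_pressure", 1000, 12.4)
twin.write("pump_01.downstream_pressure", 1000, 9.1)

ts_up, p_up = twin.read_latest("pump_01.upstream_pressure")
ts_down, p_down = twin.read_latest("pump_01.downstream_pressure")

# The synthetic series enriches the twin alongside the real sensor data.
twin.write("pump_01.flow_estimate", max(ts_up, ts_down),
           estimate_flow(p_up, p_down))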

 

INTRODUCING THE OPERATIONAL DIGITAL TWIN

An operational digital twin is the aggregation of all possible data types and data sets, both historical and real-time, directly or indirectly related to a given physical asset or set of assets in a single, unified location. The collected data must be clean and contextualized, linked in a way that mirrors how things are or would be linked in the real world, and made consumable depending on the use case.

 

BUILDING AN OPERATIONAL DIGITAL TWIN

Industrial companies have so far invested in aggregating their data and making it available to their personnel, usually through a cloud data warehouse. To build an operational digital twin, this collected data must be put through a contextualization pipeline, a process that is partly automatic and partly manual.

A strong operational digital twin requires:
• Multiple data sets & data types (unlimited)
• Multiple relationships between data (unlimited)
• Underlying principles of data vitality, data openness, and data accessibility

These requirements are inspired directly by the needs of the human users of the technology, who care about different kinds of data and need different ways to navigate and view it.
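To make the partly automatic, partly manual nature of the contextualization pipeline concrete, here is a minimal sketch of one common step: matching raw sensor tags from a source system to asset names. The tag formats, the normalization rule, and the 0.9 confidence threshold are illustrative assumptions, not a prescribed algorithm.

import difflib

# Sketch of one contextualization step: linking raw time-series tags
# to assets. High-confidence matches are accepted automatically; the
# rest are queued for manual review by a domain expert.

assets = ["21-PT-1019", "21-FT-1020", "23-VG-9101"]
raw_tags = ["IA_21PT1019.PV", "IA_21FT1020.PV", "IA_23VG9101_TEMP_B.PV"]

def normalize(tag):
    # Strip source-system prefixes/suffixes so tags and asset names
    # become comparable strings.
    return tag.removeprefix("IA_").removesuffix(".PV").replace("-", "")

def similarity(tag, asset):
    return difflib.SequenceMatcher(
        None, normalize(tag), asset.replace("-", "")).ratio()

auto_matched, needs_review = {}, []
for tag in raw_tags:
    candidate = max(assets, key=lambda a: similarity(tag, a))
    score = similarity(tag, candidate)
    if score >= 0.9:
        auto_matched[tag] = candidate                           # automatic
    else:
        needs_review.append((tag, candidate, round(score, 2)))  # manual

print(auto_matched)
print(needs_review)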

 

AKER BP’S OPERATIONAL DIGITAL TWIN: A CASE STUDY

Cognite’s work with Aker BP, one of Europe’s largest independent oil and gas companies, showcases the potential of an operational digital twin. Since Cognite became Aker BP’s main technology partner in 2017, the two companies have worked together to decrease the cost of solving digitalization use cases and increase the scalability of the solutions.

Aker BP’s tenant in Cognite Data Fusion now contains more than 260,000 time series, 1.5 trillion data points, 700,000 documents, and 30 million events. The data used to be locked in disparate silos. Now it’s available whenever and wherever workers need it, powering solutions that are helping Aker BP cut costs, optimize production, revamp maintenance routines, reduce emissions, and increase worker safety.

 

MULTIPLE DATA SETS & DATA TYPES

With openness as a default, both in terms of data sharing and information exchange across the organization, Aker BP was able to expand its digitalization initiatives quickly and efficiently. First, the company deployed Cognite Data Fusion (CDF) across all five of its operational assets and prepared to ingest data, one block at a time, in a strategic order.

To choose that order, they collected hundreds of use cases across a wide variety of domains. With the help of domain experts, they categorized the cases and prioritized them based on a number of criteria, including how quickly a case could likely be solved and how proportionately high the return on investment might be. Top priorities for Aker BP fell into the following categories: Smart Maintenance, Product Optimization, Digital Worker, HSE, and Drilling & Wells.
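As an illustration of what such prioritization might look like in practice, the sketch below ranks use cases by a weighted combination of estimated return on investment and speed to solve. The use cases, scores, and weights are invented for illustration and do not reflect Aker BP's actual figures.

# Illustrative use-case prioritization: rank candidates by a weighted
# blend of estimated ROI and speed to solve. All numbers are invented.

use_cases = [
    # (name, estimated ROI score 1-5, speed-to-solve score 1-5)
    ("Smart Maintenance: pump failure alerts", 5, 4),
    ("Digital Worker: offshore 3D navigation", 4, 2),
    ("HSE: gas leak anomaly detection", 4, 3),
]

ROI_WEIGHT, SPEED_WEIGHT = 0.6, 0.4

ranked = sorted(
    use_cases,
    key=lambda uc: ROI_WEIGHT * uc[1] + SPEED_WEIGHT * uc[2],
    reverse=True,
)
for name, roi, speed in ranked:
    print(f"{ROI_WEIGHT * roi + SPEED_WEIGHT * speed:.1f}  {name}")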

Beginning with Smart Maintenance cases, Aker BP identified sensor data (time series) and maintenance logs as the data types to liberate first. The developers who deployed CDF used targeted ingestion APIs to extract the sensor and maintenance data from their separate source systems and duplicate it in the cloud. 
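A minimal extractor for this pattern might look like the sketch below: read rows from a source system and duplicate them in the cloud via an ingestion API. The endpoint, token handling, and payload shape are hypothetical placeholders, not a documented Cognite interface.

import json
import urllib.request

# Hedged sketch of an extractor: read rows from a source system and
# duplicate them in a cloud ingestion API. The URL, token, and payload
# shape are hypothetical placeholders for illustration only.

INGEST_URL = "https://example.com/api/v1/timeseries/data"  # placeholder
API_TOKEN = "REPLACE_ME"

def read_from_source():
    # Stand-in for a query against the historian / maintenance system.
    return [{"tag": "21-PT-1019", "timestamp": 1660000000000, "value": 12.4}]

def push_to_cloud(rows):
    request = urllib.request.Request(
        INGEST_URL,
        data=json.dumps({"items": rows}).encode(),
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

if __name__ == "__main__":
    print(push_to_cloud(read_from_source()))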

To meet the needs of their Digital Workers operating offshore, Aker BP added 3D models and maintenance logs. Each set of use cases helped to identify new layers of data to liberate and add to CDF.

 

MULTIPLE RELATIONSHIPS BETWEEN DATA 

Liberating and contextualizing the data is only the first step. To be useful, the data must also be findable when a user needs it, and different users will require different ways of navigating and viewing it through the operational digital twin.

 

EXAMPLE USE CASE
A mobile worker filling a tank on an oil platform may previously have been required to radio the control room to gauge the level of oil in the tank during the process. Now they can use a handheld device to watch a digital twin of the tank fill in real time, provided they can depend on the streaming data coming through with very low latency.
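A hedged sketch of that handheld view: poll the twin for the latest tank-level datapoint and warn the worker if the data is too stale to trust. The get_latest_level function and the two-second staleness threshold are assumptions for illustration.

import time

# Sketch of the handheld tank-fill view: poll the twin for the latest
# tank-level datapoint and flag stale data. get_latest_level() is a
# hypothetical stand-in for the platform's last-value API.

MAX_STALENESS_S = 2.0   # tolerated end-to-end latency (assumption)

def get_latest_level():
    # Placeholder: would call the twin's API for the tank's level sensor.
    return {"timestamp": time.time() - 0.4, "value_pct": 63.7}

def render_once():
    point = get_latest_level()
    staleness = time.time() - point["timestamp"]
    if staleness > MAX_STALENESS_S:
        print(f"WARNING: data is {staleness:.1f}s old; do not rely on it")
    else:
        print(f"Tank level: {point['value_pct']:.1f}% ({staleness:.1f}s ago)")

render_once()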

 

Components and equipment often share more than one link or relationship in the real world. A pump has a geographical link with the pipe attached to it, as well as with any other equipment in the immediate area. The same pump has an upstream and downstream flow link with different valves and pipes. And finally, the pump has a logical link with all other pumps on the oil platform, regardless of their respective physical or flow locations. An operational digital twin can handle a variety of possible structures in order to meet the needs of its users.

Support Engineer
Needs to service a particular equipment package on an oil platform. They only care about this package. When they approach the operational digital twin, a support engineer wants to be able to navigate by system, major equipment, and the individual components they deal with every day. An asset hierarchy is a logical way to frame the data in this case.

Production Engineer
Cares primarily about production optimization. They want to know about the flow of oil. Understanding equipment types is part of that equation, but only in terms of oil flow. The production engineer doesn’t need to see all the valves on the oil platform, nor do they need a 3D model to help them find the individual valve whose sensor data is running in their model. Rather, they want to be able to combine layers of sensor data and event data to run anomaly detection models and/or set alerts related to temperature and pressure, for example. In this case, a graph database structure would be more useful.

Electrical Engineer
Wants to know whether critical equipment is receiving enough stable power. The layout of the system is important to them, but only as it pertains to the flow of electricity, not the flow of oil.
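One way to picture how a single twin serves all three personas is a graph with typed edges, where each user filters by the relationship type they care about. The equipment names and relationship types below are illustrative assumptions.

# Sketch of one graph with typed edges serving all three personas.
# Equipment names and relationship types are illustrative assumptions.

edges = [
    ("pump_A", "pipe_12", "geographic"),   # physically adjacent
    ("valve_7", "pump_A", "flow"),         # upstream of the pump
    ("pump_A", "pipe_14", "flow"),         # downstream of the pump
    ("pump_A", "pump_B", "logical"),       # same equipment class
    ("switchboard_3", "pump_A", "power"),  # electrical supply
]

def neighbors(node, relationship):
    """All nodes linked to `node` by edges of one relationship type."""
    out = set()
    for a, b, rel in edges:
        if rel == relationship and node in (a, b):
            out.add(b if a == node else a)
    return out

# Support engineer: everything physically around the pump.
print(neighbors("pump_A", "geographic"))
# Production engineer: only the flow path matters.
print(neighbors("pump_A", "flow"))
# Electrical engineer: only the power supply matters.
print(neighbors("pump_A", "power"))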

 

ALIVE, OPEN & ACCESSIBLE

The operational digital twin must be dynamic and flexible enough to meet the needs of a growing variety of users and models. This goes beyond populating the twin with a truly comprehensive set of contextualized operational data. A successful operational digital twin will also be constructed on principles of data vitality, data openness, and data accessibility. In other words, no matter how deep and wide it becomes, the twin must remain alive, open, and accessible.

Alive
As live data becomes an expectation, the question of latency will be a key differentiator for the best operational digital twins. The lower the latency, the more closely the digital twin reflects the industrial reality. Milliseconds can be the difference between theory and practical use of an application in the field.
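A simple way to keep this property measurable is to compare wall-clock time against the timestamp of the newest datapoint, as in the sketch below. latest_datapoint_ts is a hypothetical placeholder for the platform's last-value lookup.

import time

# Sketch of monitoring the latency the "Alive" principle demands:
# compare wall-clock time with the timestamp of the newest datapoint.
# latest_datapoint_ts() is a hypothetical placeholder.

def latest_datapoint_ts():
    # Placeholder for a call returning the newest datapoint's
    # timestamp (seconds since epoch) for a given time series.
    return time.time() - 0.150

latency_ms = (time.time() - latest_datapoint_ts()) * 1000
print(f"end-to-end latency: {latency_ms:.0f} ms")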

Open
There are many ways of creating value on top of the operational digital twin. Models and applications for internal use are the main drivers, of course, but there are numerous opportunities for value creation in opening the operational digital twin up for connection with other external, proprietary digital twins.

While concerns regarding intellectual property are reasonable in this increasingly open digital age, the potential rewards for finding a flexible solution to issues of digital twin integration should outweigh the instinct to remain shut off from the rest of the value chain.

Having true control over one’s operational data includes the capability to share data when and with whom it makes the most business sense. To reap the benefit of the openness described above, certain best practices are required to make the data accessible to the people authorized to use it, both internally and externally.

Accessible
The quality of any data model is contingent upon the quality of the data it runs on, and the more eyes you have on your data, the better the quality control. It's important for internal people to be able to use the API easily, without unnecessary friction (e.g., new account registrations or extra passwords). Offshore workers and other domain experts will spot errors that data scientists won't see.

Aker BP opted in August 2018 to open its operational digital twin to integration with a digital twin from Framo, a global provider of pumping systems. The goal of the digital pilot was to enable a new business model: the first performance-based contract between an operator and an OEM on the Norwegian Continental Shelf.

The historical model was a typical transaction. Aker BP purchased Framo's pumps outright and handled operation and standard maintenance, while Framo provided calendar-based maintenance and was on call for advice and special service if something went wrong. With the performance-based contract, Framo obtained continuous access to live pump data stored in Cognite Data Fusion. This shift allowed Aker BP and Framo to align their priorities. Aker BP wanted its pumps up and running as long and as efficiently as possible, avoiding costly unforeseen breakdowns and ensuring that maintenance was performed only when necessary, driven by data rather than by the calendar. Framo also wanted the pumping system to run optimally, since the new contract meant it was effectively selling uptime.

The new model was a success, reducing unnecessary maintenance by 30% and shutdowns by 70% while increasing pump availability by 40%. In March 2020, Aker BP and Framo announced that the two companies had agreed to a long-term, full-fledged smart contract, a milestone in the modernization of the Norwegian Continental Shelf.

Accessibility is also essential for potentially powerful external partnerships. To integrate with proprietary data models from an OEM (as in the Aker BP-Framo case described above), the owner of the operational digital twin should use open APIs for consumption on top of the industrial data platform. This means sending private, secure API keys to authorized users, but also providing full, open documentation on the APIs themselves.

Taking a proactive approach to data accessibility can also have other reverberating benefits. One of the biggest obstacles to data-driven innovation is lack of access to high-quality data. An operational digital twin is, by definition, a comprehensive set of cultivated data offering as close a digital representation of industrial reality as possible. Much of the data in the operational digital twin will be proprietary and necessarily private. But some of that data will be benign, and by opening that data for access to curious third parties, it's possible to stimulate unplanned innovation that may well impact your industry and the ecosystem around it.

Aker BP's digitalization work is not a one-time project. It is an ongoing effort to create an organization that can more efficiently respond to a changing market. Every solution that draws on data in Aker BP's operational digital twin drives the company toward becoming an organization where data is available and actionable for humans and machines, where algorithms eliminate the need for time-consuming busywork, and where humans are free to channel their creativity toward solving in-depth, analytical tasks.

In the summer of 2018, Aker BP launched the Open Industrial Data Project, providing the largest live industrial data set in the world to any interested party. The data originates from a single compressor on the Valhall oil platform in the North Sea. The stream is organized and made available via Aker BP's operational digital twin through the Open Industrial Data website free of charge. Registered users receive an API key that provides access to the live data set, as well as an application to navigate and visualize the contextualized compressor data, and user-friendly toolkits to help entry-level users get started with the project. The hope is that sharing this live data will accelerate innovation in predictive maintenance, condition monitoring, and other applications, directly benefiting Aker BP's operations, but also improving the health and outlook of the industrial ecosystem on the Norwegian Continental Shelf.
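To ground the access pattern described in this section, here is a hedged sketch of how an external party might consume such an open API with an issued key. The base URL, the api-key header name, and the time series identifier are assumptions for illustration, not the documented interface of any specific platform.

import json
import urllib.request

# Hedged sketch of third-party consumption: an external partner calls
# the twin owner's open API with a private key. The URL, header name,
# and identifier are illustrative assumptions only.

BASE_URL = "https://example.com/api/v1"   # placeholder
API_KEY = "REPLACE_WITH_ISSUED_KEY"

def get_latest_value(external_id):
    request = urllib.request.Request(
        f"{BASE_URL}/timeseries/{external_id}/latest",
        headers={"api-key": API_KEY},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# e.g. a pump vendor pulling live data on the pumps it sold:
print(get_latest_value("framo_pump_01.discharge_pressure"))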

 

CONCLUSION

Historically, a digital twin has had a single dimension of contextualization, built to answer the one use case it was specifically created to solve. When one creates an operational digital twin, the richness of the data describing the industrial reality allows the user to create many more correlations between data points. Data needs to be combined and contextualized to solve problems. An operational digital twin is one tool that oil and gas companies can use to gain a greater level of understanding and control over their data and their operations, discovering insights to optimize operations, increase uptime, and revolutionize business models. This is why contextualization is crucial to the creation of the operational digital twin, making it flexible and scalable enough to handle the complexities of current operations and to anticipate the demands of data-driven operations in the digitalized industrial future.

 

I’d love to hear from you in the thread below with comments or questions! 

 

 

