Morten Andreas Strøm / Ben Skal September 12, 2022
What makes Cognite unique? Why is partnering with Cognite the best investment of your time and resources?
This is a three-part series:
- What is Cognite Data Fusion and why did we build it? (First post)
- Data modeling grounded in business impact (Previous post)
- The opportunity cost of custom building your industrial data platform (This post)
In the first Why Cognite post, we discussed the data problem Cognite Data Fusion is built to address. The short answer: industrial companies need simple access to complex industrial data. The reason: most operations teams have many business opportunities but struggle to use data effectively to improve production.
In the pursuit of unlocking the value of industrial data, many companies have started the endeavor of building their own industrial data platform from generic cloud services. However, as stated by Gartner:
"Unless a software engineering leader has previously established experience for building highly scalable, cloud-based systems, building a digital business technology platform may be the most challenging development effort they have ever undertaken. It requires new technologies and architectures, which in turn require new skills to apply. This isn't just another software development project."
Prior to partnering with Cognite, most of our customers had already taken initial steps toward making data available to support their digital roadmap. In these early discussions, the most common objection we hear is:
“We have already invested in a digital platform. How does this align with using Cognite Data Fusion?”
This question is more than valid, and I think about it in two parts:
- Why not build my own industrial data platform with publicly available services?
- How can I realize value faster with Cognite Data Fusion?
Before answering these questions, we should set some boundaries around the scope of this discussion. At Cognite, we are focused on unlocking use cases tailored to operations and production: optimizing processes around energy, production, and emissions; minimizing quality deviations; reducing asset failures; enabling connected and remote workers; and more. These solution areas deal with complex industrial data that is often siloed, hard to interpret due to inconsistent naming conventions across systems, and generated by many generations of equipment.
Why not build my own industrial data platform with publicly available services?
The simplest answer to this question centers on the opportunity cost of time. Most existing investments have gone into data lakes and data warehouses. Vast amounts of data already exist there, and while a few use cases may have been developed, very few industrial companies have been able to unlock operational use cases at scale.
The reason publicly available services have been difficult to apply to operations is that they lack the industrial context needed for most solutions. This is best grounded in an example. Let's say we want to optimize maintenance and planning for a turnaround. Your maintenance and operations teams will need to know:
- The current operating conditions and health of your assets (from a model that compares design to real-time performance)
- The last time maintenance was performed on each asset (from your CMMS)
- Which processes these assets contribute to (from your P&IDs or engineering documents)
- Events generated by your assets (alarms)
- Standard operating procedures for performing maintenance and inspection (from a document management system)
A fully contextualized view of all data required to optimize maintenance
If all of this data already exists within your data lake, your teams still face the demanding task of bringing it together so it can be viewed through a single pane of glass and used to develop data-driven recommendations.
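The kind of contextualized view described above can be sketched as a small data structure that links one asset to records living in separate source systems. All class names, tags, and field names below are hypothetical illustrations of the pattern, not Cognite Data Fusion's actual model:

```python
from dataclasses import dataclass, field

# Hypothetical records; in practice these would come from the
# historian, the CMMS, the alarm system, and a document store.
@dataclass
class WorkOrder:
    id: str
    description: str
    completed: str  # ISO date when the maintenance was done

@dataclass
class Alarm:
    timestamp: str
    severity: str
    message: str

@dataclass
class Asset:
    tag: str
    health_index: float  # e.g. from a design-vs-actual performance model
    work_orders: list = field(default_factory=list)
    alarms: list = field(default_factory=list)
    documents: list = field(default_factory=list)  # P&IDs, SOPs

    def last_maintenance(self):
        """Most recent completed work-order date, or None."""
        dates = [wo.completed for wo in self.work_orders]
        return max(dates) if dates else None

# Contextualize one pump: link records that live in separate systems.
pump = Asset(tag="21-PA-1001", health_index=0.82)
pump.work_orders.append(WorkOrder("WO-553", "Seal replacement", "2022-06-14"))
pump.alarms.append(Alarm("2022-09-01T04:12:00Z", "HIGH", "Vibration limit exceeded"))
pump.documents.extend(["PID-21-001.pdf", "SOP-pump-inspection.pdf"])

print(pump.last_maintenance())  # -> 2022-06-14
```

The point of the sketch is the linking itself: once operating data, maintenance history, alarms, and documents hang off the same asset, a turnaround-planning question becomes a simple traversal rather than a cross-system data hunt.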
With generic services, connecting these many data sources in a way your teams can use to make decisions in the field may take six or more months. Time that should be spent working with your teams to develop data-driven insights is instead dedicated to building the underlying infrastructure to support these solutions.
As the Gartner quote above makes clear, building your own technology platform is a significant undertaking that pulls resources away from delivering business impact, because your teams must infuse your organization's domain expertise into the data platform themselves. At Cognite, we provide a cloud-based industrial data foundation that lets your organization spend less time wrangling data and more time building, operationalizing, and scaling solutions for your teams in the field.
How can I realize value faster with Cognite Data Fusion?
The core value proposition of Cognite Data Fusion is to automatically infuse domain expertise into your industrial data platform so your teams can focus on delivering business impact. We do this by:
- Providing a data operations platform built for industrial operations - This extends solutions beyond time series data (historian, MES, IoT platform, etc.) to also include insights from IT data (ERP, CMMS, QMS, etc.) and engineering data (P&IDs, images, 3D models, engineering documents, etc.).
- Securing a short time to value - With fully open APIs and support for applications and tools your teams already use (Grafana, Power BI, Jupyter notebooks, etc.), your teams can focus on solution delivery instead of data management.
- Enabling rapid enterprise-wide scaling of solutions - Cognite Data Fusion scales with our platform, not people, so data can be reused across many solutions, and scaling an asset performance solution, for example, can be done in hours instead of weeks or months.
Cognite Data Fusion is an open solution where data can be reused to support multiple domains
As illustrated in the high-level solution architecture below, Cognite Data Fusion's open APIs apply to both ingress and egress of industrial data and enable you to leverage the investments you have already made along your digital journey.
High-level solution architecture showcasing the flexibility of Cognite Data Fusion
Hit the ground running. Start developing solutions from day 1.
The capabilities of Cognite Data Fusion can, of course, be developed from scratch - we did it ourselves, leveraging Azure Cloud Services and open source technology.
However, industrial companies often need 30-40 people (with competencies that are hard to find) to develop and maintain even a minimum version of such a platform. These self-developed platforms are often limited to OT data and cannot drive expanded value in, for example, the direction of engineering and operations collaboration. Because such a platform takes years to build, it is often months before any initial value is harvested, and technical teams spend the majority of their time on infrastructure instead of putting energy into solving business problems. With Cognite Data Fusion, you can instead focus on developing impactful solutions from day 1. Value realization starts in days, and teams start realizing business impact in 8-12 weeks.
Fundamental differences in data modeling technology
While many in the market use a relational database approach with a strong schema, Cognite Data Fusion provides a labeled property graph operated in a schemaless way.
The flexible modeling capabilities of Cognite Data Fusion enable building solutions across a wide range of use cases, and Cognite embraces that models need to be iterated and updated over time; this topic is addressed in more detail in part 2 of this series. Because schema updates and other changes are often needed across sites, production lines, and use cases, data governance can also be a challenge when trying to keep up with ever-changing business needs.
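To make the contrast with a strongly schematized relational approach concrete, here is a minimal sketch of a labeled property graph operated schemalessly: nodes carry labels plus free-form properties, so a new label, property, or relationship can be added at any time without a schema migration. The class and method names are illustrative assumptions, not Cognite Data Fusion's actual API:

```python
# Minimal labeled property graph: nodes have labels and arbitrary
# properties; edges are typed relations between node ids.
class Graph:
    def __init__(self):
        self.nodes = {}   # id -> {"labels": set, "props": dict}
        self.edges = []   # (source_id, relation, target_id)

    def add_node(self, node_id, *labels, **props):
        node = self.nodes.setdefault(node_id, {"labels": set(), "props": {}})
        node["labels"].update(labels)
        node["props"].update(props)   # schemaless: any keys, at any time

    def add_edge(self, src, relation, dst):
        self.edges.append((src, relation, dst))

    def neighbors(self, node_id, relation):
        return [dst for s, r, dst in self.edges if s == node_id and r == relation]

g = Graph()
g.add_node("pump-1001", "Asset", "Pump", site="Plant A")
g.add_node("ts-flow", "TimeSeries", unit="m3/h")
g.add_edge("pump-1001", "HAS_TIMESERIES", "ts-flow")

# Later the model evolves without any migration: a new label and a new
# property are simply attached to the existing node.
g.add_node("pump-1001", "CriticalEquipment", criticality="high")

print(g.neighbors("pump-1001", "HAS_TIMESERIES"))  # -> ['ts-flow']
```

In a relational design, the equivalent change (a new entity type and a new column) would typically mean an explicit schema migration rolled out across every site and production line, which is exactly the governance friction the text describes.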
Scaling an ecosystem of solutions
In essence, no single monolithic solution can solve all use cases; rather, value comes from a combination of tens of use cases scaled across hundreds of production facilities. These use cases span, for instance, equipment monitoring, quality optimization, energy efficiency, predictive maintenance, and much more. To rapidly develop, operationalize, and scale solutions, old-fashioned ways of working with data are not suitable.
Each data-driven solution needs a way of packaging the data it requires (a data product), as the schema (i.e., the data model) and the required data vary from solution to solution. For this, you need:
- Efficient tools to create the initial version of the data product
- The ability to change it over time (as the industrial reality is ever-changing)
- The ability to do all of this at scale (the digital roadmap extends beyond one solution at one plant to tens of solutions across hundreds of plants)
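One way to picture such a data product is as a versioned, declarative mapping from raw source fields to a solution-specific model: creating the product is writing the mapping, changing it over time is publishing a new mapping version, and scaling is applying the same mapping to more records. Everything below, including the field names, is a hypothetical sketch of that idea, not Cognite's implementation:

```python
# Raw records as they might arrive from a source system, with its own
# (hypothetical) naming conventions.
source_records = [
    {"TAG_NO": "21-PA-1001", "SITE_CD": "A", "VIB_MM_S": 4.2},
    {"TAG_NO": "21-PA-1002", "SITE_CD": "A", "VIB_MM_S": 7.9},
]

# v1 of an "equipment monitoring" data product: select and rename the
# fields this solution needs.
mapping_v1 = {"tag": "TAG_NO", "vibration": "VIB_MM_S"}

# v2 extends the model as the solution evolves; the source data and
# v1 consumers are untouched.
mapping_v2 = {**mapping_v1, "site": "SITE_CD"}

def build_product(records, mapping):
    """Populate a solution-specific data model from raw source records."""
    return [{out: rec[src] for out, src in mapping.items()} for rec in records]

v2 = build_product(source_records, mapping_v2)
print(v2[1])  # -> {'tag': '21-PA-1002', 'vibration': 7.9, 'site': 'A'}
```

Because the mapping is data rather than code, rolling the same product out to another plant is a matter of pointing it at that plant's records, which is what makes the tens-of-solutions-across-hundreds-of-plants scaling tractable.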
To summarize this three-part series: instead of building a data platform and infusing industrial domain expertise into it yourself, Cognite Data Fusion gives you the capability to provide simple access to all OT, IT, and ET data across your operations. With fully open APIs, your teams can focus on solving industrial use cases with tools and applications they already know.
Cognite Data Fusion gives you full flexibility to capture your desired data ontology (domain data models) and enables you to provide solution-specific data models. Cognite Data Fusion can efficiently populate solution-specific data models with data, a crucial capability for developing and scaling tens of use cases across hundreds of plants. Our industrial DataOps capabilities also ensure federated data governance, where the responsibility for delivering consumable data products is distributed across the organization to those with the right competencies and domain insights.