This how-to article describes, and provides example code for, an automated P&ID annotation process that reads from and uses data modeling objects in your Cognite Data Fusion project. The process uses CDF Workflows to schedule and run two functions that together complete the automatic annotation of P&ID documents. Running the workflow populates annotations in the data model, linking the P&ID files to Assets and to other related P&ID files, as illustrated below. The annotations are then visualized on the diagrams as purple boxes linking to Assets and as orange boxes linking to files, as illustrated below.

How to use the provided code example

The provided code example is structured as a CDF Toolkit module. This means that you can set up your project to use the CDF Toolkit to govern your CDF project (as described at https://docs.cognite.com/cdf/deploy/cdf_toolkit/). The content of this …
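The preview above is cut off, but as a rough illustration of the core contextualization step such an annotation function performs, the sketch below runs the diagram detect service against a single P&ID file with the Python SDK. The entity names and file external ID are placeholders for this example and do not come from the article.

from cognite.client import CogniteClient

client = CogniteClient()  # assumes client configuration/authentication is already set up

# Entities (e.g. asset tags) to look for in the P&ID drawing; placeholder values
entities = [{"name": "21PT1019"}, {"name": "21PV1019"}]

# Run the diagram detect job on one P&ID file (placeholder external ID)
job = client.diagrams.detect(
    entities=entities,
    search_field="name",
    file_external_ids=["pid-drawing-001"],
)

# Each item holds the annotations (bounding boxes and matched entities) found in that file
for item in job.items:
    print(item.file_external_id, item.annotations)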
Hello, I want to make code changes to the open source Cognite repository https://github.com/cognitedata/cdf-sdk-java. Can you guide me through the process for doing this? How can I open a pull request on this repo? Regards, Neeraj B
Hi everyone, I would like to share with you how to build your RCAs faster with Industrial Canvas. We have seen many of our users spend more time dealing with tools and data than focusing on the cause and analyzing the issues. We can facilitate this process and speed up your time to value. Now, in Industrial Canvas, you can easily create cause maps in the same workspace where you have all the data you need as evidence for approving or rejecting a hypothesis. The built-in cause map comes with a few essential features:
- Easily create a tree with cards.
- Option to add more connected cards by clicking the "+" to the left, right, above, and below. The boxes are automatically connected, with the arrow pointing right to left.
- Auto alignment: you do not need to take any action to align cards. If you move the root card, the entire tree moves with it.
- Boxes can be connected with other data in the canvas by using connection lines.
- Multiple trees can be added to the same canvas and connected.
- Apply statuses …
Hello, I'm Pierre, a data engineer at Cognite. I'll be posting, on a regular basis, some useful tips about Cognite Data Fusion and its use. If you have any questions or remarks (technical or not), don't hesitate to reach out to me or to create a new topic in Cognite Hub. Some of the features of the Cognite API are not available in the Python SDK. You can still use those features by making a call to the desired endpoint. The SDK facilitates this thanks to the get, post, put and delete methods of the Cognite client (see https://cognite-docs.readthedocs-hosted.com/projects/cognite-sdk-python/en/latest/cognite.html#cogniteclient). Once your client is set up, the whole auth process is taken care of for you; it is very straightforward. Example:

payload = {...}
endpoint_url = "/api/v1/projects/my-project/context/diagram/detect/"
response = client.post(endpoint_url, payload)

When a payload is needed, you can refer to the documentation to get insights into its structure. (Cognite API documentation …
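As a slightly fuller sketch of the same pattern (the project name, file ID, and entity values below are placeholders, and you should check the API reference for the exact payload structure of the endpoint you call), a raw call and its JSON response could be handled like this:

from cognite.client import CogniteClient

client = CogniteClient()  # assumes client configuration/authentication is already set up

# Hypothetical payload for the diagram detect endpoint; verify the structure in the API docs
payload = {
    "items": [{"fileId": 123456789}],
    "entities": [{"name": "21PT1019"}],
    "searchField": "name",
}

endpoint_url = "/api/v1/projects/my-project/context/diagram/detect"
response = client.post(endpoint_url, json=payload)

print(response.status_code)
print(response.json())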
Overview

When extracting data from an SAP system using the OData API extraction process, you may encounter situations where not all available data is being retrieved. This issue can be resolved by utilizing server-side or client-side pagination, depending on the capabilities of your SAP endpoint.

Solution

1. Server-Side Pagination

From SAP Extractor version 2.2.0 onward, server-side pagination is supported for full data loads using the OData $skiptoken parameter. This approach allows the extraction process to handle large data sets more efficiently by managing pagination on the server side.

When to use: your SAP OData endpoint supports the $skiptoken parameter.

Sample configuration:

# full load - server pagination
- source_name: s4h_dev
  name: equipment
  destination:
    type: raw
    database: test_sap_extractor
    table: equipment_full_server
  sap_service: API_EQUIPMENT
  sap_entity: Equipment
  sap_key: [Equipment]
  pagination_type: server
  schedule:
    type: interval
    expression:
Your expertise can make a difference! By sharing your best practices and insights on Cognite Data Fusion, you position yourself as a leader in the community. Your contributions bring new perspectives, address gaps with unique use cases, and collectively grow our knowledge center. To share your best practices, simply click the + Create Topic button. Provide a brief description of your topic and its value, and include specific insights or tips, whether through a video, article, infographic, or interactive content. Keep it concise and focused on a single topic. When sharing, remember to select the How-to Guides category and add relevant tags for better categorization. All approved contributions will be marked as "Community Contributed" to acknowledge your valuable input. If your content aligns well with our educational goals and quality standards, we'll add and publish it in our Microlearning Library on Cognite Academy. Contributors of published modules will be awarded points on the Cognite …
You can load data from Cognite Data Fusion (CDF) into Microsoft Power Apps, where you can build custom apps for your business needs. This article explains how to create and connect a custom Power Apps connector with the Cognite API and build a canvas app.

Before you start

To perform the steps below, you need to be an administrator of Azure AD. Make sure you have registered the Cognite API and the CDF portal application in Azure AD and set up Azure AD and CDF groups to control access to CDF data. You'll also need to register a custom web app in Azure AD to give Power Apps access to the CDF data. Make sure to register the redirect URI https://global.consent.azure-apim.net/redirect under Authentication > Add a platform > Web.

Step 1: Create and authenticate a custom connector

1. Sign in to Microsoft Power Apps with your Microsoft account.
2. Start from Dataverse > Custom Connectors.
3. Select New custom connector > Create from blank.
4. On the General information tab: Host: Enter http…
1.0 Introduction
2.0 Steps
  2.1 Create a System DSN Entry for the MS SQL Server Database Used as the Source for the Extractors
  2.2 Installing the First Instance of the Windows Service
  2.3 Installing the Second Instance of the Windows Service
  2.4 Start the Services

1.0 Introduction

The purpose of this document is to show how to install multiple instances of the Cognite DB Extractor as a Windows service, where each instance uses a different config.yml file.

2.0 Steps

2.1 Create a System DSN Entry for the MS SQL Server Database Used as the Source for the Extractors

1. Run the "ODBC Data Sources (32-bit)" program as an administrator.
2. Click on the "System DSN" tab and click the "Add" button to create a new SQL Server DSN.
3. Use the wizard to name and configure the new System DSN entry.
4. When finished with the configuration, click the "Test Data Source" button to confirm the entry is working; the "Test Results" popup window shown to the right should open to confirm the test was …
Introduction

Co-author: @Jan Inge Bergseth

Cognite Functions provide a runtime environment for hosting and running Python code, similar to Azure Functions, Google Cloud Functions, or AWS Lambda. One of the benefits of utilizing the built-in Functions capability of CDF is that it is tightly coupled with CDF and gives you, as a developer, an implicit Cognite client, allowing you to seamlessly interact with your CDF data and data pipeline. CDF Extraction Pipelines allow you to monitor the data flow, gain visibility into the run history, and receive notifications about data integration events that need attention, all of which is an important part of the resulting data quality. CDF Extraction Pipelines also offer a great feature for storing extractor configuration settings, allowing you to remotely configure the extractor. This can be extremely helpful, as it avoids having to manage extractor configuration locally. This article will explore the combination of Extraction Pipelines …
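As a rough sketch of the remote-configuration pattern this article builds on (the extraction pipeline external ID is a placeholder, and the configuration body is assumed to be YAML), the Python SDK can fetch the configuration stored on an extraction pipeline like this:

import yaml
from cognite.client import CogniteClient

client = CogniteClient()  # assumes client configuration/authentication is already set up

# Fetch the latest configuration revision stored on the extraction pipeline (placeholder external ID)
config = client.extraction_pipelines.config.retrieve(external_id="my-extraction-pipeline")

# The configuration body is stored as text; here we assume it is YAML
settings = yaml.safe_load(config.config)
print(settings)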
Purpose: Test v3 of the db-extractor as easily as possible, with as little local setup as needed.

Outline: This example will use the Docker version of the db-extractor, and also use Docker to create a temporary PostgreSQL database. If you use another version of the extractor (like a Windows service) and/or want to use a database already installed on your local computer, it should still be fairly simple to follow this example and make your own changes where applicable.

Pre-requisites:
- Docker (can be avoided if you run a non-containerized extractor and have access to a database)
- A CDF project with IdP authentication: tenant, client, secret
- Capabilities connected to the client used in the extractor:
  - Raw: List, Read, Write
  - Extraction Pipelines Runs: Write
- One extraction pipeline

Steps:
1. Create a containerized PostgreSQL database
   - Open a new terminal.
   - Get the Docker image for the PostgreSQL database (only needed once). Run: docker pull postgres
   - Create and run the Postgres container (only needed …
In the GraphQL schema below, the actors [Person] edge properties (role, compensation) are captured in the ActedIn type. How can I get the edge properties of the actor, which are stored in a separate type, using a GraphQL query?

type Movie {
  title: String
  actors: [Person]
    @relation(
      edgeSource: "ActedIn"
      type: { space: "imdb", externalId: "HAS_ACTOR" }
      direction: OUTWARDS
    )
  director: Person
}

type Person {
  name: String!
}

type ActedIn @edge {
  role: String
  compensation: Float
  metadata: JSONObject
  lines: [String]
}
MQTT is one of the standard protocols for IoT communication in industry. It is a lightweight, publish-subscribe, machine-to-machine network protocol for message queuing services. As it is widely used, it made sense to have a dedicated extractor for that protocol. At Cognite, we have a hosted extractor for MQTT that facilitates the extraction of data from a broker. "Hosted" means it runs on our infrastructure and you don't have to deploy anything on yours to extract data: you just have to configure it. In this article, we will see how to configure and use it. An example scenario involving MQTT could be sensors installed on industrial machines publishing time series data, like temperature, pressure, etc., to a broker. Data would be published to different topics. When extracting data from the broker with the Cognite extractor, we can have different behaviors based on the topic. Here, we will publish data to four different topics on an MQTT broker. Those topics are named as follows: …
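To make the scenario concrete, here is a minimal publisher sketch using the paho-mqtt library. The broker host, topic names, and payload shape are placeholders for this example and are not the ones used later in the post.

import json
import random
import time

import paho.mqtt.client as mqtt

# Placeholder broker and topics for this sketch
BROKER_HOST = "localhost"
TOPICS = [
    "plant/line1/temperature",
    "plant/line1/pressure",
    "plant/line2/temperature",
    "plant/line2/pressure",
]

# Note: with paho-mqtt 2.x you may need mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client = mqtt.Client()
client.connect(BROKER_HOST, 1883)
client.loop_start()

# Publish one reading per topic every second for ten seconds
for _ in range(10):
    for topic in TOPICS:
        payload = json.dumps({
            "timestamp": int(time.time() * 1000),
            "value": round(random.uniform(0, 100), 2),
        })
        client.publish(topic, payload)
    time.sleep(1)

client.loop_stop()
client.disconnect()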
The steps in this post assume that you have access to an instance of Retool and Cognite Data Fusion. It also assumes basic knowledge of Cognite resource types like Assets and Events. A thing that customers of Cognite have realized is that it's incredibly hard to predict today which business processes and use cases you will need to solve in the future, or even tomorrow (literally). Thus, one of the philosophies behind Cognite Data Fusion is to enable a solid and flexible data foundation that makes building solutions and processes for new problems faster and more scalable. One way of speeding up the building of new apps and workflows is to make data readily available in open, well-defined, and performant APIs. This lets builders use a wide range of tools and select the best tool for the task at hand. Here we'll show how to build a simple mobile-friendly application with Retool and Cognite Data Fusion, to manually log production downtime events and attach them to an Asset in Cognite Data Fusion.
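Under the hood, such an app simply writes Events through the CDF API. As a hedged illustration of the data being logged (the external ID, timestamps, and asset ID below are placeholders for this sketch, not values from the post), the equivalent call with the Python SDK looks roughly like this:

from cognite.client import CogniteClient
from cognite.client.data_classes import Event

client = CogniteClient()  # assumes client configuration/authentication is already set up

# Hypothetical production downtime event attached to an asset
downtime = Event(
    external_id="downtime-2024-01-01-pump-01",
    type="downtime",
    subtype="unplanned",
    start_time=1704103200000,  # epoch milliseconds
    end_time=1704106800000,    # epoch milliseconds
    asset_ids=[1234567890],    # placeholder asset ID
    description="Pump tripped; manually logged from the mobile app",
)

client.events.create(downtime)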
In Fusion Data Modeling (FDM), aggregate queries currently support filtering only on a View's own properties; users cannot filter on related types in aggregate queries. See the example below. If you refer to the images below, you can see that the Actor is related to the Movie, and that the relation data is populated in Data Management as well. However, when you try to filter on any of the related types in an aggregate query, you will get an error message saying the field is not defined. The reason is that the aggregate endpoint is serviced by a different backend (Elasticsearch) than the one used for querying data in FDM, and this service cannot normalize relationships; it only supports aggregations of data in a single cont…
Currently, CDF Workflows have no built-in scheduling mechanism (it is on our roadmap). A simple way to implement schedule-based execution of a workflow is to leverage Cognite Functions. Below is a two-step example to explain how it works.

1. Create the Cognite Function that will act as the workflow trigger

Note that you need to specify the client_credentials parameter inside the call to client.workflows.executions.trigger for the authentication to work at runtime.

# Enter credentials and instantiate client
from cognite.client import CogniteClient

cdf_cluster = ""  # "api", "greenfield", etc.
cdf_project = ""  # CDF project name
tenant_id = ""  # IdP tenant ID
client_id = ""  # IdP client ID
client_secret = ""  # IdP client secret

client = CogniteClient.default_oauth_client_credentials(
    cdf_project, cdf_cluster, tenant_id, client_id, client_secret
)

# Define the Function handle
def handle(client, data, secrets):
    from cognite.client.data_classes import ClientCredentials
    execution = client.workflows. …
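The code preview is cut off at the trigger call. As a hedged sketch of how it might continue (the workflow external ID, version, secret names, function name, and cron expression below are placeholders, not values from the post), the handle could finish like this, and step 2 would then deploy and schedule the function:

    # ...continuing inside handle(): trigger the workflow with explicit credentials
    execution = client.workflows.executions.trigger(
        workflow_external_id="my-workflow",  # placeholder
        version="1",                         # placeholder
        client_credentials=ClientCredentials(
            secrets["client-id"], secrets["client-secret"]  # assumed secret names
        ),
    )
    return {"workflow_execution_id": execution.id}

# 2. Deploy the function and schedule it (sketch), e.g. hourly
trigger_function = client.functions.create(
    name="workflow-trigger",
    external_id="workflow-trigger",
    function_handle=handle,
)

client.functions.schedules.create(
    name="hourly-workflow-trigger",
    cron_expression="0 * * * *",
    function_id=trigger_function.id,
    client_credentials=ClientCredentials(client_id, client_secret),
)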
Docker is one of the most common ways to deploy an extractor because of its ease of use. Our off-the-shelf extractors have a Docker image you can download and run. But when it comes to custom extractors, you have to build the Docker image yourself (if you intend to run it with Docker, of course). In this article, we'll see how to do that. We'll use the example of the REST extractor from a previous article (https://hub.cognite.com/developer-and-user-community-134/rest-extractor-1232). It is a custom extractor, made with our extractor-utils package, hence it does not have a pre-built Docker image. The content of this article will be specific to REST extractors made with the extractor-utils package, but it is still applicable to other kinds of extractors. You simply have to make sure to adapt environment variables, arguments, etc. to what your extractor needs. First, make sure that you have Docker installed, up and running. Open the Python project with the extractor in your code editor. …
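The preview stops here; as a rough sketch of the kind of Dockerfile such an extractor-utils project typically ends up with (the file names, entry point, and config mount are assumptions for this example, not taken from the article), it could look something like this:

# Minimal Dockerfile sketch for a Python extractor built with extractor-utils
FROM python:3.11-slim

WORKDIR /app

# Install the extractor's dependencies first to benefit from layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the extractor source code
COPY . .

# The config file is expected to be mounted into the container at runtime, e.g.:
#   docker run -v $(pwd)/config.yaml:/app/config.yaml my-extractor
CMD ["python", "main.py", "config.yaml"]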
Introduction

In this tutorial, we'll explore one method to stream data from Cognite Data Fusion (CDF) to Power BI. While Cognite's Power BI connector serves its purpose, it has limitations, especially in terms of live data streaming and handling complex queries. Here I am presenting an alternative approach using "push" or "streaming" datasets in Power BI, bypassing the limitations of Cognite's Power BI connector by using a scheduled job to fetch data from the CDF REST API and push it to the Power BI REST API.

1) Understanding the Limitations of Import Mode Connectors

Cognite's Power BI connector is based on OData (Open Data Protocol) and uses Power BI's Import mode. Import mode is the most common mode used to develop semantic models. This mode delivers fast performance to end users thanks to in-memory querying. It also offers design flexibility to modelers, and support for specific Power BI service features (Q&A, Quick Insights, etc.). Because of these strengths, it's the default …
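As a rough sketch of the push job described above (the time series external ID, the Power BI push URL, and the row schema are placeholders; a real push URL is generated when you create a streaming dataset in Power BI), the scheduled job could look something like this:

from datetime import datetime, timezone

import requests
from cognite.client import CogniteClient

client = CogniteClient()  # assumes client configuration/authentication is already set up

# Placeholder push URL from a Power BI streaming ("push") dataset
POWER_BI_PUSH_URL = "https://api.powerbi.com/beta/<workspace-id>/datasets/<dataset-id>/rows?key=<key>"

# Fetch the latest datapoint for a (placeholder) time series in CDF
latest = client.time_series.data.retrieve_latest(external_id="my-timeseries")

rows = [{
    "timestamp": datetime.fromtimestamp(latest.timestamp[0] / 1000, tz=timezone.utc).isoformat(),
    "value": latest.value[0],
}]

# Push the rows into the Power BI streaming dataset
response = requests.post(POWER_BI_PUSH_URL, json={"rows": rows})
response.raise_for_status()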
Requirements

- Access to your CDF project.
- Access to your applications and services.
- A new Azure Active Directory tenant.

Follow the steps carefully to plan and prepare for a successful migration:

Step 1: Collect project data
Step 2: Configure new IdP identity and access management
  Step 2.1: Groups and users
  Step 2.2: Register applications and service accounts
  Step 2.3: Register the new IdP for the existing domain
Step 3: Access management
Step 4: Enable the new OIDC config for the project
Step 5: Update CDF Transformations, Functions, Extractors, Grafana, custom apps, etc.

Step 1: Collect project data

Collect available data for all the items that you need to migrate:

Applications:
- Make a list of Cognite applications (e.g., Remote, InField, …) used by your CDF project.
- Make a list of third-party applications (e.g., Grafana, Power BI, Azure Data Factory, …) used by your CDF project.

Services:
- Make a list of scheduled functions.
- Make a list of the extractors in use.