Recently active
type FTAC_ActivityLog {
  Site: String
  AddDate: Date
  ActionComments: String
  VersionNumber: Int64
  UserComments: String
  Label: String
  AssetId: String
  ActivityDate: Date
  Action: Int64
  ActivityLogId: String
  SortOrderId: Int64
  User: String
  VersionId: String
  AssetPathId: String
}

I have a model as above, and I want to run an aggregation for a user-per-site statistic. How do I prepare my aggregation query? I'm a newbie at this; I've looked at the aggregation section of https://docs.cognite.com/cdf/dm/dm_graphql/dm_graphql_querying but am still quite confused.
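For reference, a minimal sketch of what this could look like, assuming the generated schema follows the documented aggregate<TypeName> pattern; the (space, externalId, version) tuple of the data model below is a hypothetical placeholder, and the query shape is not verified against your project:

from cognite.client import CogniteClient

client = CogniteClient()  # credentials/config as usual

# Count FTAC_ActivityLog instances per (Site, User) bucket.
gql = """
query {
  aggregateFTAC_ActivityLog(groupBy: [Site, User]) {
    items {
      group          # the (Site, User) values for this bucket
      count {
        externalId   # number of instances in the bucket
      }
    }
  }
}
"""

result = client.data_modeling.graphql.query(
    ("my_space", "MyDataModel", "1"),  # hypothetical data model id
    query=gql,
)
print(result)

Each item in the response should then carry the group values together with the instance count for that (Site, User) combination.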
I've had the OPC-UA extractor running for a couple of months with no issues. On Feb 19 it stopped pushing data to Cognite properly. According to Charts, it started pushing again on March 6, yet as I write this it is March 3rd, and the data extends out through June 2025. The host machine running the extractor has been restarted, the OS date/time looks correct, and the log file has the correct date/time. I'd appreciate some advice.
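One way to confirm from the API side whether future-dated points actually landed in CDF (a hedged diagnostic sketch; the externalId is a hypothetical placeholder):

from datetime import datetime, timezone

from cognite.client import CogniteClient

client = CogniteClient()

# Ask for the latest datapoint before a date far in the future; if the
# returned timestamp is months ahead of today, bad source timestamps
# (rather than a stopped extractor) are the likely culprit.
before = int(datetime(2026, 1, 1, tzinfo=timezone.utc).timestamp() * 1000)
latest = client.time_series.data.retrieve_latest(
    external_id="opcua:my.affected.tag",  # hypothetical externalId
    before=before,
)
if latest and latest.timestamp:
    print(datetime.fromtimestamp(latest.timestamp[0] / 1000, tz=timezone.utc))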
How is agentic AI used in Cognite? Are there any use cases where agentic AI is being used in the oil and gas industry?
While reviewing the data-modeling-lifecycle documentation available on GitHub for Neat, one of the first steps mentioned is validating the sheet using Neat (a React app demonstrated in a YouTube video). However, I could not find any reference to the Neat UI for validating statements in the spreadsheet. Does this mean that this step can be skipped?
Hi, we have quite a number of time series whose externalIds contain spaces. Unfortunately this throws off the search algorithm of the data explorer (and also in Charts when adding new data). Disallowing fuzzy search (which should not be necessary, since we *know* the externalId we are looking for) yields no results, while allowing fuzzy search on the name returns almost every time series, most not even remotely related to the one we are looking for. I tried placing quotes around the externalId without any success. Is there any way to protect the spaces in the name, or to get the search to match externalIds including the spaces?
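As a possible stopgap while search behaves this way: retrieval by externalId is an exact lookup that bypasses the fuzzy tokenizer entirely. A minimal sketch with the Python SDK (the externalId is a made-up example):

from cognite.client import CogniteClient

client = CogniteClient()

# Exact-match lookup: spaces in the externalId are preserved as-is,
# and no fuzzy matching is involved.
ts = client.time_series.retrieve(external_id="pump 01 discharge pressure")
print(ts.name if ts else "not found")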
Can we use Azure Document Intelligence with CDF documents? And if I have CDF access, are the Azure services free or not, given that an AAD tenant is assigned when the account is created?
Requirements
- Access to your CDF project.
- Access to your applications and services.
- A new Azure Active Directory tenant.

Follow the steps carefully to plan and prepare for a successful migration:
Step 1: Collect project data
Step 2: Configure new IdP identity and access management
  Step 2.1: Groups and users
  Step 2.2: Register applications and service accounts
  Step 2.3: Register the new IdP for the existing domain
Step 3: Access management
Step 4: Enable the new OIDC config for the project
Step 5: Update CDF transformations, Functions, extractors, Grafana, custom apps, etc.

Step 1: Collect project data
Collect available data for all the items that you need to migrate:
Applications:
- Make a list of Cognite applications (e.g. Remote, InField, …) used by your CDF project.
- Make a list of third-party applications (e.g. Grafana, Power BI, Azure Data Factory, …) used by your CDF project.
Services:
- Make a list of scheduled functions.
- Make a list of the extractors in use.
Groups used with CDF:
- Make
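For Step 1, a minimal sketch of how the group inventory could be pulled programmatically with the Python SDK, so each CDF group can be mapped to the IdP object it is linked to; treat this as an illustration, not part of the official guide:

from cognite.client import CogniteClient

client = CogniteClient()

# Inventory all CDF groups (not only those tied to your own identity),
# so they can be re-created against the new tenant.
for group in client.iam.groups.list(all=True):
    print(group.name, group.source_id)  # source_id is the linked IdP object id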
Hello, could we define default filters, locations, and column lists in the search UI, so that we don't have to redo the same setup every time we open the industrial tools search UI? Could you please turn this into a feature request if it is not available today? Thanks
In order to make it easier to use the data registered in CDF, we propose the development of a data linkage function with Excel. Today, working from the data registered in CDF (mainly time series data), for example when creating a plant operation report, I have to download the time series data as a CSV and then copy and paste it into the report, which is very time-consuming. If data could be acquired directly into Excel cells through an Excel linkage function, the utilization of data would be promoted. There is an OData function already, but it would be best if this existed in a more user-friendly form, like an Excel function.
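Until a native Excel function exists, a minimal sketch of the current programmatic route: pull the datapoints into a pandas DataFrame with the Python SDK and write them straight to a workbook. The external IDs, time window, and aggregates below are hypothetical, and .to_excel needs openpyxl installed:

from cognite.client import CogniteClient

client = CogniteClient()

df = client.time_series.data.retrieve_dataframe(
    external_id=["plant/flow_rate", "plant/temperature"],  # hypothetical ids
    start="30d-ago",
    end="now",
    aggregates=["average"],
    granularity="1h",
)
df.to_excel("plant_operation_report.xlsx")  # one column per series, indexed by time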
Hi Team, below is a requirement to improve user experience and performance when using transformations.

Current problem: As a data engineer, I have no visibility into how many transformations are scheduled to run in a given time period. As a result, we rely on manual checks or intuition to decide when to schedule our transformations so that we don't hit the concurrency limit or put increased load on the CDF instance.

Proposed idea: A schedule heatmap of time of day against run count would help here. Time should be divided into buckets of, say, 10-15 minutes, to analyze how many transformations are scheduled within each period. The count should also account for the repetitions implied by each schedule's cron expression (see the sketch below).

Note: This can be extended to functions and data workflows as well. @Aditya Kotiyal @Jørgen Lund
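A minimal sketch of how such a heatmap could be computed client-side today: expand each schedule's cron expression over one day into 15-minute buckets. It uses the third-party croniter package; the interval attribute holding the cron string follows the current Python SDK, but treat the details as assumptions:

from collections import Counter
from datetime import datetime, timedelta

from cognite.client import CogniteClient
from croniter import croniter

client = CogniteClient()

day_start = datetime(2025, 3, 3)
day_end = day_start + timedelta(days=1)

buckets = Counter()
for schedule in client.transformations.schedules.list(limit=None):
    itr = croniter(schedule.interval, day_start)  # interval holds the cron string
    run = itr.get_next(datetime)
    while run < day_end:
        # Round down to the 15-minute slot the run falls into.
        slot = run.replace(minute=(run.minute // 15) * 15, second=0, microsecond=0)
        buckets[slot.strftime("%H:%M")] += 1
        run = itr.get_next(datetime)

for slot, count in sorted(buckets.items()):
    print(f"{slot}  {'#' * count}")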
Hello team, we have tried to query the Entity view, which has a property "properties" of type view Property, a reverse direct relation. We are using the query endpoint and the instances.query() SDK method to do so. We want the details of the properties to come back in the select object of Entity. The query (truncated) is below:

{
  "with": {
    "0": {
      "limit": 50,
      "nodes": {
        "filter": {
          "and": [
            { "matchAll": {} },
            {
              "hasData": [
                {
                  "type": "view",
                  "space": "slb-pdm-dm-governed",
                  "externalId": "Entity",
                  "version": "1_7"
                }
              ]
            },
            {
              "or": [
                {
                  "equals": {
                    "property": ["node", "externalId"],
                    "value": "fba80a5d3b994db698e74b77fb96f1de"
                  }
                }
              ]
            }
          ]
        }
      }
    },
    "0_6": {
      "limit": 10000,
      "nodes": {
        "from": "0",
        "through": {
          "source": {
            "type": "view",
            "space": "slb-pdm-dm-governed",
            "externalId": "Property",
            "version": "1_7"
          },
          "identifier": "entity"
        },
        "direction": "inwards",
        "filter": {
          "and": [
            { "matchAll": {} },
            {
              "hasData": [
                {
                  "type": "view",
                  "space": "slb-pdm-dm-governed",
                  "externalId": "Property",
                  "version": "1
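For comparison, a rough equivalent of this traversal with the Python SDK's instances.query(); class names follow recent cognite-sdk versions, but the select clause, the "*" property wildcard, and the as_property_ref helper should be verified against the SDK version in use:

from cognite.client import CogniteClient
from cognite.client.data_classes.data_modeling import ViewId
from cognite.client.data_classes.data_modeling.query import (
    NodeResultSetExpression,
    Query,
    Select,
    SourceSelector,
)
from cognite.client.data_classes.filters import And, Equals, HasData

client = CogniteClient()

entity_view = ViewId("slb-pdm-dm-governed", "Entity", "1_7")
property_view = ViewId("slb-pdm-dm-governed", "Property", "1_7")

query = Query(
    with_={
        # Step "0": the single Entity node we start from.
        "0": NodeResultSetExpression(
            filter=And(
                HasData(views=[entity_view]),
                Equals(["node", "externalId"], "fba80a5d3b994db698e74b77fb96f1de"),
            ),
            limit=50,
        ),
        # Step "0_6": follow Property.entity backwards ("inwards") to
        # collect the Property nodes that point at the Entity.
        "0_6": NodeResultSetExpression(
            from_="0",
            through=property_view.as_property_ref("entity"),
            direction="inwards",
            filter=HasData(views=[property_view]),
            limit=10000,
        ),
    },
    select={
        "0": Select([SourceSelector(entity_view, ["*"])]),
        "0_6": Select([SourceSelector(property_view, ["*"])]),
    },
)

result = client.data_modeling.instances.query(query)

Note that the query endpoint returns the related Property instances as a separate result set (keyed "0_6") rather than embedded inside the Entity objects, so if you need them nested under Entity they have to be stitched together client-side.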
Hello Cognite Community! I'm Satyabrata Kanungo, a Senior Business Intelligence and Data Services Consultant with a strong focus on data architecture, analytics, and engineering. I have over a decade of experience in the banking, finance, and enterprise data management sectors, specializing in designing and implementing scalable data solutions.

My Background & Expertise
🔹 Enterprise Data Architecture – Designing and refining BI models, data integration methods, and scalable data solutions.
🔹 Cloud Data Platforms – Extensive experience with Microsoft Azure Data & Analytics PaaS Services, Databricks, and Google Cloud for data engineering and advanced analytics.
🔹 Data Governance & Quality – Implementing data quality frameworks, governance architectures, and metadata management.
🔹 Big Data & Analytics – Building optimized data pipelines, warehouse solutions, and real-time analytics frameworks to support business decision-making.
🔹 Business Intelligence & Visualization – Developing Power
Problem: If one or more of the source time series are missing datapoints, those points will also be missing in the synthetic time series output. Would it be possible to add an option to supply a default "not available" value for missing datapoints, which the API could then use to calculate a partial result? Source:
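Until such an option exists, a minimal client-side sketch of the same idea: retrieve the source series yourself, substitute a default for the gaps, and evaluate the expression in pandas. The external IDs, time window, and default value are hypothetical:

from cognite.client import CogniteClient

client = CogniteClient()

df = client.time_series.data.retrieve_dataframe(
    external_id=["sensor/a", "sensor/b"],  # hypothetical source series
    start="7d-ago",
    end="now",
    aggregates=["average"],
    granularity="1m",
)
df = df.fillna(0.0)  # the default "not available" value for missing points
result = df.iloc[:, 0] + df.iloc[:, 1]  # the expression the synthetic TS would compute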
This how-to article describes and provides a structured example template for automating the P&ID annotation process in Cognite Data Fusion (CDF). The process leverages CDF Data Modeling and Workflows to automate annotation, linking P&ID documents to assets and other related files within the data model.

Overview of the Workflow
The workflow consists of two automated functions scheduled and executed using CDF Workflows:
1. Metadata & Alias Processing: Updates or creates metadata and aliases for files and assets.
2. P&ID Annotation Processing: Uses the generated aliases to annotate P&ID documents automatically.
The final result is populated annotations in the data model, linking P&ID files to assets and to interrelated P&ID diagrams, as illustrated below.

Key Features of the Workflow
1. Integrated Extraction Pipeline
Both functions are connected to a dedicated extraction pipeline. The pipeline stores the overall documentation and configuration, and maintains logging & notifications.
2. Metadata and Alias
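For orientation, the annotation step typically reduces to a diagram-detect contextualization call. A minimal sketch, assuming the aliases produced in step 1 are stored in asset metadata under an "aliases" key; this is an illustration of the technique, not the article's actual function code:

from cognite.client import CogniteClient

client = CogniteClient()

# Build the entity list the detect job matches against: the asset name
# plus any comma-separated aliases from metadata (assumed layout).
entities = []
for asset in client.assets.list(limit=None):
    aliases = (asset.metadata or {}).get("aliases", "")
    for alias in [asset.name, *aliases.split(",")]:
        if alias:
            entities.append({"name": alias, "assetId": asset.id})

job = client.diagrams.detect(
    entities=entities,
    search_field="name",
    file_ids=[123456789],  # hypothetical P&ID file id
)
annotations = job.result  # detected tags with bounding boxes, per file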