Hi, I would like to check whether there is any CDF Admin UI or available API that provides telemetry and usage-related insights, such as:
- User activity logs: logins, user actions, dataset access, frequency of use
- API usage metering: calls made by service accounts or integrations, volume over time
- Application usage tracking: which integrations/apps are active, which endpoints are being used, etc.
- Quota and resource usage tracking: number of API calls, storage/compute loads
Are there any CDF admin dashboards, telemetry APIs, or audit logs available for this purpose? Please advise. Thanks
Currently, the only way to restore FDM database data is by opening a support ticket, and even then, restores are only available for specific incident scenarios. This process is manual, time-consuming, and restrictive, limiting our ability to quickly recover from data issues or accidental deletions.
Proposed Feature
We would like a self-service backup and restore feature that allows users to:
- Create on-demand backups of database data.
- Schedule automated backups at configurable intervals.
- Perform partial or full restores of data as needed.
- Preview or validate a backup before restoring.
Benefits
- Faster Recovery: Immediate access to backups without waiting for support intervention.
- Operational Efficiency: Reduces downtime and manual effort required for data recovery.
- Greater Flexibility: Enables controlled rollback of data changes in non-incident scenarios.
- Consistency with Other Cloud Providers: Many cloud platforms offer self-service backup/restore tools, making this a standard expectation.
Hello everyone! I’m Sara, and I work as a consultant. I’m here to learn more about Cognite, especially how CDF can enhance my work and contribute to my career growth. Looking forward to connecting! Cheers, Sara
Hi, I have been trying to use the sync API to pull changes from a data model, specifically node instances from a view. When testing, some fields within a view were changed/updated, but I was not able to pull anything new using the sync API. This was after I created a baseline of the full dataset using the Query endpoint. The sync endpoint did work when new nodes were added to the view; it would pull only the new rows, so I think my query is correct. Is there some parameter to turn on, or another way to use the sync API, to retrieve both changes to existing instances and new additions?
I would like to know what to fill in in Power BI to be able to connect to CDF. I tried what was shown in the course, but I get an error asking me to add the organization name. publicdata?tenantID=48d5043c-cf70-4c49-881c-c638f5796997
When performing an activity (task) in Infield, you can select the + sign to add images or videos. It would be nice to be able to add other types of files, like PDFs, Word documents, etc., instead of images/videos only. Especially in the Infield Desktop version.
We are excited to announce a significant enhancement for production optimization workflows: our integration with the PETEX PROSPER simulator is now Generally Available (GA)! This marks a key milestone in providing robust, production-ready tools for our customers. Alongside this, development continues on the PETEX GAP simulator integration, which remains available in Beta.

The Challenge: Many organizations struggle to effectively utilize their rich, contextualized operational data with specialized simulation tools like those from PETEX. This often creates data silos, hinders real-time optimization efforts, and increases manual workload, impacting overall efficiency in production environments.

Cognite Data Fusion® Integration for PETEX: Cognite Data Fusion® excels at contextualizing vast amounts of industrial data, providing a single source of truth. To bridge the gap between this rich data foundation and engineering simulations, we now offer direct integrations connecting CDF with PETEX.
Hello, I’m getting this error while using the latest CDF Spark datasource. I was only able to make it work using version 2.1.10 (com.cognite.spark.datasource:cdf-spark-datasource_2.12:2.1.10). Could you please take a look? Thank you!
We need Cognite to provide a Power BI connector that is not just a batch import into Power BI, but rather a LIVE Power BI connection that provides instant updates to dashboards in near real time. This is a CRITICAL request from CELANESE.
Hello, While testing the data governance feature per space in the data model service, we noticed that it works on the instance level but not on the data model level. Here are the tests conducted:

Test 1: We tested giving users access to only the instances that are in a specific space by applying the following ACL:
- dataModelInstancesAcl:
    actions:
      - READ
    scope:
      spaceIdScope: { spaceIds: ['test'] }
When requesting the data model, only instances in the test space are returned. The ACL works on the instance level. Perfect!

Test 2: When applying a similar ACL on the data model level:
- dataModelsAcl:
    actions:
      - READ
    scope:
      spaceIdScope: { spaceIds: ['not_that_important_data_models'] }
In this case, we notice that we are revoked access to all data models, even the data models in the not_that_important_data_models space. We expected to be able to access only the data models in not_that_important_data_models. Could you please check? Thanks
Problem Overview: When generating a Python SDK notebook using pygen with a NEAT-created data model, you might encounter the following error:

UndefinedError: 'cognite.pygen._core.models.fields.connections.OneToOneConnectionField object' has no attribute 'data_class'

Root Cause: This error typically occurs when one or more properties in your data model are missing the "Name" field in the NEAT spreadsheet under the "Properties" tab. Although NEAT allows you to create and save a data model without filling in the property name, this leads to missing metadata when pygen tries to generate code: specifically, it fails when trying to reference data_class on an incomplete field definition.

When It Happens: You’ll see the error when calling:
pygen = generate_sdk_notebook(data_model_id, client)

✅ How to Fix It:
1. Open the NEAT spreadsheet for the data model causing the issue.
2. In the "Properties" tab, ensure that every row has a value filled in for the "Name" column.
3. If any "Name" values are missing, fill them in, then save and redeploy the data model before rerunning pygen.
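Before rerunning pygen, it can help to sanity-check the exported property rows programmatically. A minimal sketch; the `rows` list and the `View`/`Property`/`Name` column names are illustrative stand-ins for rows read from the NEAT "Properties" tab (e.g. via pandas or openpyxl), not an official API:

```python
# Hypothetical pre-flight check: flag rows in a NEAT "Properties" sheet
# that are missing a value in the "Name" column before running pygen.
# Plain dicts stand in for spreadsheet rows here.

def find_missing_names(rows):
    """Return (View, Property) identifiers of rows whose Name is empty."""
    missing = []
    for row in rows:
        name = (row.get("Name") or "").strip()
        if not name:
            missing.append((row.get("View"), row.get("Property")))
    return missing

rows = [
    {"View": "Pump", "Property": "pressure", "Name": "Pressure"},
    {"View": "Pump", "Property": "flowRate", "Name": ""},    # missing
    {"View": "Valve", "Property": "state", "Name": None},    # missing
]

print(find_missing_names(rows))  # → [('Pump', 'flowRate'), ('Valve', 'state')]
```

Any rows it reports are the ones to fix in the spreadsheet before regenerating the SDK.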
Root Cause Analysis (RCA) is a method used to identify the underlying causes of equipment failures. The goal of RCA is to understand why failures occur so that measures can be taken to prevent their recurrence. The Industrial Canvas is particularly useful for conducting RCA by providing a structured environment to gather data and identify the underlying causes of equipment failures. The Cause Map agent further enhances the user experience by guiding the retrieval of relevant data and suggesting a cause map aligned with ISO 14224. It is especially useful for personnel such as site reliability engineers and has proven to significantly reduce the time spent on RCA. This how-to article shows how to implement an Atlas AI agent that can be adapted and extended to suit the particular needs of customers. Requirements: Atlas AI must be enabled in your Cognite Data Fusion environment. Contact your Cognite Customer Business Executive or contact@cognite.com for more information. Equipment data must be available in your CDF project.
Hello, We are using the CDF toolkit to deploy our containers, views, and data models. We recently added many new views, and these views are interdependent, i.e. they reference each other in their properties through a source block or via the implements block. Here are examples of these view dependencies:

- externalId: View1
  implements: []
  name: View1
  properties:
    test:
      container:
        externalId: Container1
        space: some_space
        type: container
      containerPropertyIdentifier: Test
      name: test
      source:
        externalId: View2
        space: some_space
        type: view
        version: v1
  space: some_space
  version: v1
- externalId: View2
  implements: []
  name: View2
  properties:
    wow:
      container:
        externalId: Container2
        space: some_space
        type: container
      containerPropertyIdentifier: Test
      name: wow
  space: some_space
  version: v1

or

- externalId: Country
  implements:
    - externalId: CountryAttributes
      space: '{{sp_dm_dap_knowledge_graph}}_wv'
      type: view
      version: '{{dap_version}}'
  name: Country
  properties:
    Wells:
      connectionType: multi_reverse_direct_relation
      name: We
In Cognite Academy Fundamentals: Working With CDF: Contextualize, I have followed all of the instructions several times for Entity Matching and, in all instances, get “No matches found”, so there is nothing to confirm or write to CDF. What am I missing here?
Please add a feature where you can select all when filtering on “is equal to”, similar to how you would do it in Excel. This is extremely helpful when trying to obtain a specific data set out of your search.
When bringing a file into Canvas, a drop-down comes up asking the user to select a Space. If the correct Space isn’t selected, then the file can’t be added. Users shouldn’t need to select a space; it should be defaulted.
This feature request is to implement email notifications for transformation errors within workflows.
Building upon the recent enhancements of waypoints and ghosting, the ability to walk a user to a location from their current location or a selected start point is a use case for our contractors during daily maintenance or Turnaround activities. Often contractors have never been onsite, and navigation to the work area or a specific asset would be helpful. Think of Google Maps’ walking feature, or a line that connects scan points in the 3D model to guide the user to the location.
Allow custom apps to be accessible in the InField Mobile view. This would allow customers to create and use a large variety of custom applications for users to easily access while in the field, while not having to leave the InField application.
Using Power BI Desktop and Cognite’s REST API Connector (beta): when configuring the GraphQL parameters of space, data model, version, and query, there is a problem when pasting a query that was copied from the Query Explorer (or even Notepad) into the query input box. It truncates the query string at the first carriage return (CR) and/or line feed (LF). If all CR and LF characters are removed from the query string, it pastes properly and runs properly.
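As a workaround until the paste handling is fixed, the query can be flattened to a single line before pasting. A quick sketch of the idea (the `listMyView` query is a made-up example, not a real view):

```python
# Workaround sketch: collapse CR/LF in a GraphQL query so it survives
# pasting into the connector's single-line query input box.

def to_single_line(query: str) -> str:
    # str.split() splits on any whitespace run (including \r and \n),
    # so joining with single spaces yields a one-line query.
    return " ".join(query.split())

query = "query {\r\n  listMyView {\r\n    items {\r\n      name\r\n    }\r\n  }\r\n}"

print(to_single_line(query))  # → query { listMyView { items { name } } }
```

GraphQL treats newlines and spaces interchangeably outside of string literals, so the flattened query is semantically identical.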
I am a newbie at this. To set up the Cognite file extractor for local file upload to CDF, how do I configure the environment variables in the YAML? According to example-local.yaml, I need to provide: COGNITE_BASE_URL, COGNITE_PROJECT, COGNITE_CLIENT_ID, COGNITE_TOKEN_URL, COGNITE_CLIENT_SECRET. I am a little confused: should I create a .env file with all of those environment variables and put it in the same folder as the YAML config file, or can I inject those variables into the configuration file itself? If so, is there an example? I’d appreciate someone shedding some light on this.
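For what it’s worth, the Cognite extractors generally support `${ENV_VAR}` substitution directly in the YAML config, so you can keep secrets out of the file and export the variables in your shell (or load a .env file by your own means) before starting the extractor. A sketch, assuming the usual `cognite` section layout; check your version’s example-local.yaml for the exact key names:

```yaml
# Values in ${...} are substituted from the process environment at load time.
cognite:
  project: ${COGNITE_PROJECT}
  host: ${COGNITE_BASE_URL}
  idp-authentication:
    client-id: ${COGNITE_CLIENT_ID}
    secret: ${COGNITE_CLIENT_SECRET}
    token-url: ${COGNITE_TOKEN_URL}
    scopes:
      - ${COGNITE_BASE_URL}/.default
```

With this layout there is no need to place a .env file next to the config; the extractor only needs the variables to exist in its environment when it starts.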
Hi. I’d like to see more aggregation functions in the Grafana data source. Currently only average, interpolation, and step_interpolation are allowed. Any chance of implementing support for sum, count, etc.? Error: count is not one of the valid aggregates: AVERAGE, INTERPOLATION, STEP_INTERPOLATION. https://docs.cognite.com/cdf/dashboards/guides/grafana/timeseries
I think a map-like interface would be a really useful add-on to the search experience in CDF. How about putting a minimap tile on each asset that has a geolocation set, similar to what exists for images in “Vision”? It should be possible to also expand this to an additional UI where a map is used as the entry point to find data.
Use Case: I am building a dashboard in Power BI that will visualize Events from CDF (about one million events per year). To make the solution scale, I want to use incremental refresh of the semantic model so I only refresh the model with the newest Events since the last refresh. I have followed your tutorial on incremental refresh (which seems to be a copy/paste of Microsoft’s tutorial), but I still have questions:
- What happens if the StartTime attribute of an Event is a date/time/zone when loaded into the model? The attribute needs to be a date/time type, which means I need to convert the type first. How will the incremental refresh work if the model needs to load the data in order to convert the type, and then apply the RangeStart/RangeEnd filters?
- Can I use incremental refresh with the new REST API Connector? When I load events with the new connector, I get the start times as the number of milliseconds since the epoch, so the type conversion from above also applies here.
Thanks for your help! Ander
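On the epoch-milliseconds point: the conversion itself is a one-liner. In Python terms, as a reference for whatever you express in Power Query M (typically adding the milliseconds as a duration to 1970-01-01):

```python
# Convert CDF event timestamps (milliseconds since the Unix epoch, UTC)
# to timezone-aware datetimes.
from datetime import datetime, timezone

def epoch_ms_to_datetime(ms: int) -> datetime:
    # CDF timestamps are UTC, so pin the result to UTC explicitly.
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

print(epoch_ms_to_datetime(1700000000000))  # 2023-11-14 22:13:20+00:00
```

Whether the RangeStart/RangeEnd filter folds back to the source after such a conversion is the crux of the incremental-refresh question; if the filter is applied only after a full load, the conversion step defeats the purpose of the feature.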