Recently active
I'm using the Python SDK and want to query instances based on a condition using the `data_modeling.instances.query` method. I have a view `TimeseriesProperty` that has a direct relation/reference to a `TimeseriesPropertyType` view. Is it possible to apply a distinct filter to the result? Multiple `TimeseriesProperty` instances can point to the same `TimeseriesPropertyType` instance, and we don't want duplicated `TimeseriesPropertyType` instances coming back in the response for the query below:

```python
view_id = ViewId(space="slb-pdm-dm-governed", external_id="TimeseriesProperty", version="2_0")
v_id_2_PROP = ViewId(space="slb-pdm-dm-governed", external_id="TimeseriesPropertyType", version="2_0")
query = Query(
    with_={
        "TimeseriesProperty": NodeResultSetExpression(
            limit=10,
            filter=HasData(views=[view_id])),
        "TimeseriesPropertyType": NodeResultSetExpression(
            limit=10000,
            direction="outwards",
            from_="TimeseriesProperty",
            through=view_id.as_property_ref("propertyType"),
            filter=HasData(views=[v_id_2_PROP])),
    },
    sele
```
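As a stopgap, the `TimeseriesPropertyType` result set can be deduplicated on the client side. A minimal sketch with a hypothetical helper, assuming each returned node exposes an external id either as a dict key or as an attribute (as SDK node objects do):

```python
def dedupe_by_external_id(nodes):
    """Drop duplicate nodes, keeping the first occurrence of each external id.

    Hypothetical helper: `nodes` is any iterable of dicts with an
    "externalId" key, or objects with an `external_id` attribute.
    """
    seen = set()
    unique = []
    for node in nodes:
        ext_id = node["externalId"] if isinstance(node, dict) else node.external_id
        if ext_id not in seen:
            seen.add(ext_id)
            unique.append(node)
    return unique
```

This only trims the response after the fact; it does not reduce the `limit` budget consumed by duplicates server-side, which is why a true distinct filter would still be valuable.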
Welcome to the CDF Fundamentals Discussion! This discussion is dedicated to helping learners of the Cognite Data Fusion Fundamentals learning path succeed. If you’re struggling with the exercises in this learning path, try the tips & tricks below or post a comment with the challenge you’re facing. You can also post your own tips and respond to fellow learners’ questions. Cognite Academy’s instructors are also here to help.
When bringing a file into Canvas, a drop-down appears asking the user to select a Space. If the correct Space isn’t selected, the file can’t be added. Users shouldn’t need to select a space; it should be defaulted.
Hi, I have been trying to use the sync API to pull changes from a data model, specifically node instances from a view. When testing, some fields within a view were changed/updated, but I was not able to pull anything new using the sync API. This was after I created a baseline of the full dataset using the Query endpoint. The sync endpoint did work when new nodes were added to the view: it pulled only the new rows, so I think my query is correct. Is there some parameter to turn on, or another way to use the sync API, so that it retrieves changes as well as new additions to instances?
I would like to suggest a new feature on the "list datasets" endpoint. It would be extremely helpful to have the count of files (of specific MIME types) included as a property in the response. Currently, as an example, to determine the number of .pdf and .docx files in a specific dataset, we make two separate requests to the files.aggregate() endpoint, each filtered by the datasetId and the mimeType (one request per MIME type). This approach becomes inefficient when we need this information across many datasets, as it requires too many requests: we would have to list the datasets and then retrieve the file counts through files.aggregate() for each one. The proposed solution is to enhance the data_sets.list() endpoint by adding a 'files' filter that accepts an array of MIME types. The response would include the datasets that contain those file types, along with the file counts. Thank you in advance!
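For illustration, the current workaround looks roughly like the sketch below. The helper name is hypothetical; `client` is assumed to be a configured Cognite client exposing `files.aggregate(filter=...)` as in the Python SDK, and the filter is given in raw dict form:

```python
def count_files_by_mime_type(client, data_set_id, mime_types):
    """Count files of each MIME type in one dataset.

    Hypothetical helper: issues one files.aggregate() call per MIME type,
    which is exactly the per-dataset fan-out this request aims to avoid.
    """
    counts = {}
    for mime in mime_types:
        aggregates = client.files.aggregate(
            filter={"dataSetIds": [{"id": data_set_id}], "mimeType": mime}
        )
        counts[mime] = aggregates[0].count  # the SDK returns a list of aggregates
    return counts
```

With N datasets and M MIME types this is N×M requests, which is why a single filter on data_sets.list() would help.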
With the recent update, it is now possible to add tables to Canvas. Currently, I believe you can only input text into the tables, but is it possible to embed asset information, timelines, and files into them? If this becomes possible, it would allow for a more organized dashboard in Canvas, greatly expanding its usability.
This group is dedicated to anyone working with - or simply fascinated by - robotics in industrial environments. Whether you’re deploying drones for inspection, using ground robots for autonomous tasks, or exploring subsea operations with ROVs, this is the space to share your experiences, challenges, and insights. We aim to explore the current capabilities of industrial robots and push the conversation toward what’s coming next: from agile ground-based platforms and aerial systems to manipulation robots and even humanoids. How far can robotics go in transforming work in complex environments like offshore energy, manufacturing, or heavy industry? A central theme in our discussions will be the connection between robotics and AI, with a special focus on Embodied AI - where intelligence is not just computational, but physically situated in a body that perceives and acts in the real world. How do advances in perception, decision-making, and autonomy enable robots to operate more intelligently?
Hi team, I want to import some functions from a `util_folder` that sits in the same directory as the folder containing `handler.py`. Since the files in `util_folder` are used by other Cognite Functions too, I can’t move them into `function_folder`; but when importing functions from `util_folder` after deploying the Cognite Function, I get an import error. The structure is:

```
Repo
├── function_folder
│   ├── file1
│   └── handler.py
└── util_folder
    ├── file2
    └── file3
```

I want to use methods from `file2` in `handler.py`, and I have added `__init__.py` to `util_folder`.

```
2025-04-04T12:17+05:30 : Traceback (most recent call last):
2025-04-04T12:17+05:30 :   File "/home/site/wwwroot/function/_cognite_function_entry_point.py", line 297, in import_user_module
2025-04-04T12:17+05:30 :     handler = importlib.import_module(handler_module_path)
2025-04-04T12:17+05:30 :   File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
2025-04-04T12:17+05:30 :     return _bootstrap._gcd_import(name[
```
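For reference, one deployment sketch that may avoid this (names are hypothetical): only the uploaded folder ends up under `/home/site/wwwroot/function/` at runtime, so anything outside it is invisible to `handler.py`. Uploading from the repo root keeps `util_folder` packaged next to `function_folder`. This assumes the SDK's `functions.create` accepts `folder` and `function_path` parameters:

```python
# Minimal sketch: deploy from the repo root so util_folder/ ships with the code.
deployment = dict(
    name="my-function",                          # hypothetical function name
    folder=".",                                  # repo root: includes both subfolders
    function_path="function_folder/handler.py",  # entry point, relative to folder
)
# client.functions.create(**deployment)  # requires a configured CogniteClient

# Inside handler.py, imports are then relative to the uploaded root
# (hypothetical names):
# from util_folder.file2 import my_method
```

The trade-off is that the whole repo root is uploaded with every function that uses this layout.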
We are excited to share our quarterly training updates on Cognite Academy with you! Over the past quarter, we’ve added new and engaging courses, made improvements to our learning experience, and continued to build content that empowers our community of customers and partners. Check out what’s new below!

- 5 Microlearning courses on Generative AI: This microlearning series introduces key concepts of Generative AI for industry, from foundational knowledge to practical applications. Learn how to implement secure AI solutions, build contextualization pipelines, and explore real-world use cases, all while understanding the strategic value AI brings to industrial operations.
- Charts - Configuring alerts and notifications: Learn how to configure alerts and notifications in Charts.
- Improving RCA (Root Cause Analysis) with Canvas: Discover how a data-driven maintenance approach empowers you to proactively address asset issues, streamline decision-making, and reduce production downtime.
- Upload a File t
Hello, As our industrial data evolves, we're facing challenges with schema versioning, specifically when modifying or extending the properties of existing asset or time series types in Cognite Data Fusion (CDF). 🙂 For instance, adding new fields or changing data types (e.g., integer to float) in metadata often breaks downstream pipelines or applications that expect the original structure. CDF doesn’t enforce a strict schema, but how do others handle backward compatibility in real-world use? 😐 We’ve experimented with versioned types and custom labels to signal changes, but this quickly becomes hard to manage at scale. Some of our consumers rely on fixed field names, and introducing breaking changes results in unexpected behaviors in the SDKs or Fusion apps. 😐 Is there a recommended practice for managing schema evolution that preserves data integrity while enabling flexibility for growth? 🤔 Looking to hear how other developers are addressing this challenge—do you
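One low-tech pattern that can help on the consumer side (a sketch, not a CDF feature): wrap metadata access in a tolerant reader so consumers survive missing or re-typed fields instead of depending on fixed names and types. The helper name and fields are hypothetical:

```python
def read_field(metadata, field, default=None, cast=None):
    """Read a possibly-missing or re-typed metadata field tolerantly.

    Hypothetical helper: returns `default` if the field is absent or
    cannot be converted with `cast` (e.g. a value that changed from
    integer to an unparseable string).
    """
    value = metadata.get(field, default)
    if cast is not None and value is not None:
        try:
            value = cast(value)
        except (TypeError, ValueError):
            return default
    return value

# Usage sketch: survives both a missing field and an int-to-float change.
meta = {"maxTemperature": "87"}
limit = read_field(meta, "maxTemperature", default=0.0, cast=float)
```

This does not solve producer-side versioning, but it decouples consumers from the exact wire type while a schema migration is in flight.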
Hi, can this feature be added to my company projects? Henrik
Hello, As per the documentation, Pygen exposes a `graphql_query` method. However, I’m not able to find it. Could you please help? Version used: cognite-pygen==1.2.1. Thanks
Hi everyone, I’m Liz, and just joined Cognite as a Senior Project Manager. I have a background in Civil Engineering and IT and I’m hoping to gain a good understanding of CDF so I can help deliver successful projects to our customers.
Important Update: Removal of Alpha Features in the Data Modeling user interface

Dear Users, We are writing to inform you about an important update regarding the alpha features in the Data Modeling user interface. Over the past year, we have been testing several alpha features to enhance your experience. However, after careful consideration, we have decided to deprecate the following features:

- Population
- Smart Suggestions
- Knowledge Graph

What does this mean for you? The feature flag for these alpha features will be removed with the Q2 2025 release. For projects that currently have these features activated, the functionality will no longer be available, and the experimental feature button in the Data Modeling user interface will be empty after that date. For new projects or users who have never activated these features, the only noticeable change will be that the experimental feature button remains empty. You can achieve similar experiences to the Population and Smart Suggestion experimental
Hello, While testing the data governance feature per space in the data model service, we noticed that it works at the instance level but not at the data model level. Here are the tests conducted:

Test 1: We gave users access only to instances in a specific space by applying the following ACL:

```yaml
- dataModelInstancesAcl:
    actions:
      - READ
    scope:
      spaceIdScope: { spaceIds: ['test'] }
```

When requesting the data model, only instances in the `test` space are returned. The ACL works at the instance level. Perfect!

Test 2: When applying a similar ACL at the data model level:

```yaml
- dataModelsAcl:
    actions:
      - READ
    scope:
      spaceIdScope: { spaceIds: ['not_that_important_data_models'] }
```

In this case, we notice that access to all data models is revoked, even data models in the `not_that_important_data_models` space. We expected to be able to access the data models in `not_that_important_data_models`. Could you please check? Thanks
Hi, I would like to check whether there is any CDF Admin UI or available APIs that provide telemetry and usage-related insights such as:

- User activity logs: logins, user actions, dataset access, frequency of use
- API usage metering: calls made by service accounts or integrations, volume over time
- Application usage tracking: which integrations/apps are active, which endpoints are being used, etc.
- Quota and resource usage tracking: number of API calls, storage/compute loads

Are there any CDF admin dashboards, telemetry APIs, or audit logs available for this purpose? Please advise. Thanks
Streamline Industrial Workflows and Collaboration with Extensible AI Agents Cognite’s latest product release furthers its leadership in industrial AI and DataOps by increasing the usability and flexibility of AI agents for AI-powered insights The newest release of Cognite’s industrial agent workbench, Cognite Atlas AI™, allows users to easily customize AI agents and increases the accessibility of real-time data with REST API support. Additionally, the latest release of Cognite Data Fusion® enhances collaboration amongst industrial users for troubleshooting and root cause analysis in Industrial Canvas, and accelerates production optimization workflows with Petex simulator integration. Transform traditional industrial workflows with AI-powered insight With the release of the REST API agent tool for Cognite Atlas AI, AI agents now have access to the most relevant, up-to-date information in Cognite Data Fusion’s industrial data foundation. Previously, Atlas AI relied upon pre-built configu
Hi everyone, I’m Guillermo, an IT professional with extensive experience in the Oil & Gas industry and cloud solutions. I’m excited to deepen my understanding of CDF through the Project Manager track and look forward to learning from the community and contributing where I can. Best regards, Guillermo
We are creating various types of templates in Infield, such as routine equipment inspection checklists, equipment checks, and plant start-up checklists that are only carried out during plant commissioning. The checklists issued from these templates are currently displayed mixed together. If they could be filtered by template type (for example, route inspections, equipment checks, start-up checklists), inspectors would find it easier to select a checklist, and it would also be easier to search for a desired checklist later when reviewing.
Hi Team, This request is to display a time series’ UnitExternalId on the UI/front-end, under the time series information in the data explorer/search. Currently, it is visible using transformations, APIs, or the SDK. But for someone who is not adept at using transformations, APIs, or SDKs (i.e., the target persona is an SME), displaying the UnitExternalId on the UI is better suited. Thanks, Akash
As part of the ‘democratization of data’ paradigm, it would be very useful for end users with limited data proficiency to access the data they need through their trusted Microsoft Excel. While it is possible to connect CDF data to Excel through OData feeds, this can be a barrier for many users. Therefore, it would be very good to have a CDF Excel add-in where users can sign in with their Entra ID and access the data they need for whatever use they have. If this can be kept as simple as possible, it would be a lot easier to onboard even conservative users.
When performing an activity (task) in Infield, you can select the + sign to add images or videos. It would be nice to be able to add other types of files, like PDFs and Word documents, instead of images/videos only. Especially in the Infield desktop version.
Creating a product request for @msibs90’s request for user insights. “I would like to check whether there is any CDF Admin UI or available APIs that provide telemetry and usage-related insights such as: user activity logs (logins, user actions, dataset access, frequency of use); API usage metering (calls made by service accounts or integrations, volume over time); application usage tracking (which integrations/apps are active, which endpoints are being used, etc.); quota and resource usage tracking (number of API calls, storage/compute loads).” KAES would also like to have this data available for us to consume and use as we continue to report to our leadership on the value Cognite is creating and the journey we have been on. We currently get this from our customer success partner, but having it available via an API would be most ideal so that we have more control and can build KPIs specific to us. This also takes the burden off the customer success manager to build and co
Currently in our Charts instance, if I want to add additional PI-tag data from the relevant P&IDs by clicking on the P&IDs icon, the drawing doesn’t load automatically. I have to select the correct category and then open the correct P&ID. This will be difficult for end users to navigate.
I am using the [Cognite Data Fusion Python SDK](https://cognite-sdk-python.readthedocs-hosted.com/en/latest/data_modeling.html#delete-instances). While deleting a node, e.g.

```python
client.data_modeling.instances.delete(NodeId(space='some-space', external_id=deleted_node_id))
```

I want to also delete the edge(s) from the parent, in order to avoid leaving orphan edges in the database.

**Problem**: how can I look up such edges? I know how to get all edges (and then filter by the `end_node` attribute), but that would be very inefficient:

```python
edges = client.data_modeling.instances.list(
    instance_type="edge",
    limit=-1)
```

How could I construct the `Filter` for the `end_node` (to match the `deleted_node_id`) and ideally the `type` (to equal the `Child` type)? E.g.

```python
edges = client.data_modeling.instances.list(
    instance_type="edge",
    filter=And(Equal(???), Equal(???)),
    limit=-1)
```

Because then I could just delete all those edges with one `client.data_modeling.instances.delete(edges
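To frame the question, here is a sketch of what the filter is believed to look like as a raw `/models/instances/list` request body: one `equals` on the edge's `endNode` and one on its `type`. This is a hedged sketch of the raw API filter shape (the helper name is hypothetical, and it assumes the edge type lives in the same space), not confirmed SDK `Filter` syntax:

```python
def edge_cleanup_filter(space, deleted_node_id, type_external_id):
    """Sketch of a raw DMS filter matching edges whose endNode is the
    deleted node AND whose type equals the given direct-relation reference.
    Assumes the edge type lives in the same space as the node."""
    return {
        "and": [
            {"equals": {"property": ["edge", "endNode"],
                        "value": {"space": space, "externalId": deleted_node_id}}},
            {"equals": {"property": ["edge", "type"],
                        "value": {"space": space, "externalId": type_external_id}}},
        ]
    }
```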