Tell us what features you’d like to see and support other ideas by upvoting them! Share your ideas in a user story format like: As a <role>, I want to <function>, so that <benefit>. This will help us better understand your requirements and increase the chance of others voting for your request.
Ability to import a checklist template from a Word or Excel file (in some required format/template) instead of building it in the UI only. Alternatively, pull the template from CDF or another source.
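The import side of this request could be quite simple. Below is a minimal sketch, assuming a hypothetical CSV/Excel-export format with `group`, `task`, and `description` columns (the actual required template would be defined by the product team):

```python
import csv
import io

def parse_checklist_template(csv_text):
    """Parse a CSV export of a checklist template into task dicts.

    Assumed columns (hypothetical format): group, task, description.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    tasks = []
    for row in reader:
        tasks.append({
            "group": row["group"].strip(),
            "task": row["task"].strip(),
            "description": row.get("description", "").strip(),
        })
    return tasks

sample = """group,task,description
Safety,Check valve V-101,Verify isolation
Safety,Inspect gauge PG-7,Record pressure
"""
print(parse_checklist_template(sample)[0]["task"])  # Check valve V-101
```

The same structure would work for an Excel sheet read via a spreadsheet library; the point is that a documented column layout is all the importer needs.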
When creating a “Point of Interest” in Scene, after it’s created, you need to click on an area that doesn’t overlap with the point cloud (the dark blue area in the screenshot) to display its details. This means you have to zoom in quite a lot to avoid overlapping with the point cloud, which makes it difficult to check a “Point of Interest” while viewing the entire point cloud. Therefore, it would be helpful if “Points of Interest” could be made easier to click and select, or if Scene could include a mode that allows selecting only “Points of Interest.” This would make the feature much more practical.
Dear Cognite Community,

One of the repetitive tasks I have noticed in InField is that some checklist items are the same across multiple templates, or that several templates need the same task added in an update (for example, an additional requirement). Currently we can duplicate tasks within a single template and duplicate a whole template, but there is no option to drag and drop or copy tasks (or grouped tasks) between templates, so users have to recreate the same task in every template that needs it. I hope this idea is considered for a future release of InField, as that simple addition would greatly help end users.
Hello,

As seen in several topics:

Cognite Hub - Alert/Notification when MQTT Hosted Extractor is down
Cognite Hub - MQTT Hosted Extractor Error

Could it be possible to configure some kind of alert or notification with rules on topic filter status? A timeline of the last 72 hours is already computed. It would be very useful to receive something (an email, a notification) on a custom trigger to avoid silent data loss.

Regards,
Pierre Rambourg
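The trigger logic being asked for could be as simple as a silence threshold per topic filter. A minimal sketch, assuming a hypothetical rule of the form "alert if no message has arrived within N hours" (the function name and rule shape are illustrative, not an existing API):

```python
from datetime import datetime, timedelta, timezone

def should_alert(last_message_time, now, max_silence):
    """Return True when a topic filter has been silent longer than the
    configured threshold (a sketch of the proposed custom trigger)."""
    return (now - last_message_time) > max_silence

# Example: last message arrived 5 hours ago, threshold is 1 hour.
now = datetime(2024, 1, 2, tzinfo=timezone.utc)
last_seen = now - timedelta(hours=5)
print(should_alert(last_seen, now, timedelta(hours=1)))  # True
```

Since the 72-hour timeline is already computed, evaluating a rule like this against it and firing a mail/notification on a state change would cover the silent-data-loss case.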
With the Cognite Core Data Model, we can create different types of Assets. Right now I have a case with EquipmentTag and Functional Location (FLOC) types. An EquipmentTag always has a FLOC as a parent, so the hierarchy is always:

FLOC1
-- FLOC1.1
---- EquipmentTag
FLOC2
-- EquipmentTag

In this case, if I open the EquipmentTag, the UI tries to fetch the whole hierarchy as EquipmentTag instances, so the UI knows that a path exists but can’t build the tree view because no data is returned. If I open the same EquipmentTag just as an Asset, the tree view works fine, but then I lose all the specific data from my EquipmentTag.

Opening as EquipmentTag
I’m creating this feature request on behalf of @Smitakshi Sen. In CDF Data Explorer, it should be possible to download a filtered list of events in Excel or CSV format with all the selected metadata columns. For example, when filtering events by type “Failure”, CDF returns the list of filtered Failure events; however, that list cannot be exported for audit and reconciliation.
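The export itself is straightforward once the filtered list is available. A minimal sketch, assuming event rows are plain dicts with a nested `metadata` map (the field names here are illustrative, mirroring how events commonly carry metadata):

```python
import csv
import io

def events_to_csv(events, columns):
    """Serialize a filtered list of event dicts to CSV with the
    selected metadata columns."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns, extrasaction="ignore")
    writer.writeheader()
    for event in events:
        # Flatten nested metadata into the top-level row so metadata
        # keys can be selected as columns alongside standard fields.
        row = {**event, **event.get("metadata", {})}
        writer.writerow(row)
    return buf.getvalue()

events = [
    {"externalId": "ev-1", "type": "Failure", "metadata": {"site": "A"}},
    {"externalId": "ev-2", "type": "Failure", "metadata": {"site": "B"}},
]
print(events_to_csv(events, ["externalId", "type", "site"]))
```

A "Download as CSV" button in Data Explorer doing essentially this over the current filter result would close the audit/reconciliation gap.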
In InField 1.0 the user was able to print the summary report of the checklists. The user would like to export it as a PDF to upload to other software and to email the reports if needed.
User story:
As a developer
I want to produce documented and transparent code
so that my end users and colleagues can understand what is going on.

Current state
For a deployed Cognite Function we see three tabs: Calls, Schedules, and Details. Documentation is limited to the description field.

Desired state
I would like to see four tabs: Calls, Schedules, Details, and Documentation/Readme. I would like to be able to write a README markdown file that I package together with the handler, requirements, and all the rest. This should be picked up and rendered in a dedicated tab for the function.

I think this would really increase transparency in the deployed functions. All employees have access to CDF, but not all have access to the source code repositories where documentation usually lives. The deployed Function is a black box at the moment.
We would like to have a “contains” filter option on columns. This would be useful for searching string-type columns to check whether they contain some substring, so the data can be filtered properly.
Need the ability to filter time series data on the "Last Reading" column in Search. The field is neither filterable nor sortable, nor is it configurable in the Search Admin.
Hello,

Today, if we reference a node in a data model instance to create a relationship between two nodes, the referenced node’s existence is not validated at insertion time. In order to enforce data quality, do you envision adding a “strict mode” at the DMS level that checks for the referenced node’s existence before creating the reference? For example, running this query results in a successful transformation run:

select
  'test' as externalId,
  'test' as space,
  node_reference('space', 'external_id_that_does_not_exist') as Child
from staging_table

Thanks!
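Conceptually, the requested strict mode is a pre-insert check of each direct relation against the set of existing nodes. A minimal sketch, assuming rows and node references are represented as plain dicts and `(space, external_id)` tuples (illustrative shapes, not the actual DMS internals):

```python
def validate_references(rows, existing_nodes):
    """Sketch of a 'strict mode' check: reject rows whose direct relation
    points at a node that does not exist. `existing_nodes` is the set of
    (space, external_id) pairs already present in the data model."""
    errors = []
    for row in rows:
        ref = row.get("Child")
        if ref is not None and tuple(ref) not in existing_nodes:
            errors.append((row["externalId"], ref))
    return errors

existing = {("space", "known_node")}
rows = [
    {"externalId": "test", "space": "test",
     "Child": ("space", "external_id_that_does_not_exist")},
]
print(validate_references(rows, existing))
```

With strict mode enabled, a non-empty error list like this would fail the transformation run instead of silently writing a dangling reference.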
Hi,

Would it be possible to extend the ReverseDirectRelation concept to populate from a list of DirectRelations, and not just a single DirectRelation?

Best regards,
Olivier
I am using the Cognite SDK, and I need to count the instances present in a given View. I can list all instances, but that is a heavy response and likely a heavy execution on the backend as well. I only need the total count of instances, which, if computed on the Cognite side and returned as a number, would be lighter and quicker. It would also be good if I could apply some filters and get the corresponding count.
Customers using the AI Agent can generate insights in the form of:

- PNGs (charts, visualizations, cause maps)
- Tables (structured results)

However, there is no supported, documented way for the AI Agent to directly push these outputs into Canvas. Users must manually download images or recreate tables, then upload or rebuild them in Canvas. This breaks workflow continuity and reduces the usability of AI Agent outputs in collaborative analysis.

Proposed Enhancement
Introduce a native capability for AI Agents to:
1. Push generated PNGs directly into Canvas as embedded visuals.
2. Push generated tables into Canvas, ideally as:
   - A Canvas DataGrid (when mature), or
   - A structured table block that preserves rows, columns, and formatting.

This could be exposed as:
- An “Add to Canvas” / “Send to Canvas” action from the AI Agent output.
- A programmatic API available to agents and workflows.
Inspiration
“Context is king in the world of AI.” Across research, publications, and industry discussions, one theme consistently stands out — AI without context lacks true intelligence. To unlock the full potential of Industrial AI, we must ground AI solutions in process context.

Vision
Introduce Process-Aware Knowledge Graphs (PAKGs) that integrate process understanding directly into the Cognite Data Fusion (CDF) ecosystem. By capturing and structuring the interconnections, interdependencies, and material flows from Process Flow Diagrams (PFDs), we enable context-driven intelligence for Agentic AI solutions built on Atlas AI and CDF.

Core Capabilities
- System Model Extraction: Automatically extract process metadata from P&IDs and PFDs (PDF/image formats). This removes the dependency on CAD files, which are often unavailable or inconsistent.
- Process-Aware Knowledge Graph Generation: Translate the extracted system model into a Knowledge Graph enriched with process semantics. Represent equipment, process streams, and control loops as nodes and relationships, creating a foundation for process discovery, reasoning, and autonomous insights.

Value Proposition
- Enables Agentic AI systems to reason over process context.
- Accelerates ROI realization from Cognite solutions by improving AI explainability, traceability, and domain relevance.
- Lays the groundwork for next-generation Industrial AI applications — from automated root cause analysis to process optimization.

Ask
I propose enhancing CDF to support this capability natively, creating a bridge between engineering documentation and context-aware AI models.
Problem Statement
Cognite Data Fusion (CDF) offers a powerful suite of tools for industrial data operations, but its adoption remains limited to highly technical users such as data engineers, data scientists, and developers. Today, creating data transformations, writing functions, deploying models, and generating insights in CDF typically requires:

- Knowledge of Spark SQL for transformations
- Python programming for custom functions
- Understanding of data modeling concepts
- Manual deployment and orchestration

This steep technical barrier restricts broader usage, particularly among domain experts like production operations engineers, maintenance supervisors, or process owners who possess deep contextual knowledge but lack coding skills. As a result, CDF usage and ROI are throttled by dependence on a small pool of technical resources.

Vision
Empower every domain expert to become a CDF power user — without writing a single line of code.

Proposed Solution: Cognite VISION – an AI-Powered No-Code Experience
Introduce Cognite VISION, an out-of-the-box AI agent integrated into CDF that uses LLMs to eliminate the need for coding expertise. With VISION, a user can simply ask:

"Join sensor data from the compressor with maintenance logs and create a dashboard to predict downtime every 6 hours."

VISION handles the rest:
- Interprets the intent using an LLM
- Writes Spark SQL transformations behind the scenes
- Creates and deploys Python functions for processing or inference
- Builds contextualized data models
- Schedules pipelines
- Deploys insights to dashboards or external apps

All within seconds, fully auditable, and explainable for enterprise transparency.
Key Features
- Natural Language Interface: Ask for transformations, models, or dashboards in plain language
- Automatic Backend Generation: LLMs write code, configure parameters, and deploy pipelines
- One-Click Deployment: From request to production in a few clicks or a single prompt
- Insight Builder: Automatically recommends and generates insights based on domain context
- Governed Execution: Every AI-generated artifact passes through existing governance and logging frameworks
Hi Team,

This request is to display a time series’ UnitExternalId on the UI/front-end under the time series information in Data Explorer/Search. Currently, it is visible using transformations, APIs, or the SDK. But for someone who is not adept at using transformations, APIs, or the SDK (i.e. the target persona is an SME), displaying the UnitExternalId on the UI is better suited.

Thanks,
Akash
One of our customers would like to see a preview of 2D plant drawings or P&IDs in Search, just like the panel we see for 3D models.
I hear that the Charts monitoring feature includes logic to prevent excessive and unnecessary alerts. Specifically, time series data uploaded at intervals longer than one minute is excluded from monitoring, and alerts are not triggered for such data. The reason is that the system cannot determine whether a lapse in recent values over a certain period is due to “legitimate low-frequency data” or “a system error causing missing data.”

On the other hand, we have some time series with very low update frequencies. For example, we have data that is ingested only once every half day (see figure below). With such data, the Charts monitoring feature cannot detect anomalies, and alerts are not triggered. (We have no plans to increase the acquisition frequency due to wireless system specifications.)

Could you consider adding a switch to the Charts monitoring feature that allows users to disable the current behavior of excluding data uploaded at intervals longer than one minute? I believe other industries also have data that is captured only once per day. Allowing users to toggle this behavior as needed would minimize the impact on the Cognite Data Fusion backend. This request comes directly from field engineers, so I would greatly appreciate your consideration.
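One way to resolve the "legitimate low-frequency data vs. missing data" ambiguity is a per-series expected interval, which is essentially what the requested switch implies. A minimal sketch, assuming a hypothetical `expected_interval_s` setting and a tolerance factor (both names are illustrative):

```python
def is_data_missing(last_point_age_s, expected_interval_s, tolerance=1.5):
    """Decide whether a gap is anomalous for a low-frequency series.

    With a per-series expected interval (the proposed switch), a series
    updated twice a day is only flagged once the gap clearly exceeds
    its normal cadence, so legitimate low-frequency data stays quiet.
    """
    return last_point_age_s > expected_interval_s * tolerance

HALF_DAY = 12 * 3600
print(is_data_missing(10 * 3600, HALF_DAY))  # False: within normal cadence
print(is_data_missing(24 * 3600, HALF_DAY))  # True: a reading was missed
```

Exposing the expected interval as a user-configurable setting would let monitoring cover half-daily or daily series without generating false alerts for them.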
Is it possible to have a set of pre-built templates for various disciplines or generic use cases, which a new user can directly utilize for preliminary work until they are trained enough to start building their own canvas? Some users like to use Industrial Canvas like a dashboard, and for them a pre-built template gives an idea of how to use the canvas.
We recently did a backup of all configurations for a solution in both our dev and test projects. For this we used `cdf dump`. The command gave us a lot of options on what to dump, and for each resource we needed to provide an ID. However, most of the resources we used were scoped to a CDF group, so we built a tool that uses a group as the basis for fetching all the resource IDs and constructing the `cdf dump` commands.

My feature request is the ability to dump all resources scoped to a CDF group with the `cdf dump` command, so you would only specify the name of the CDF group to base the resource dumping on. This would simplify backup and dumping a lot.
Today, the PI extractor can only write time series instances to a single target space per deployment, even when the underlying PI server contains data for multiple sites or governance domains. The only practical workaround is to run many PI extractor instances against the same PI server, each with different tag filters and a different target space, which is hard to scale and operate. [Governance & spaces; Multi-space limitation]

I would like the PI extractor to support multiple target spaces from a single deployment, where the space is selected per time series based on configurable filters. Typical examples would be routing by tag name prefix or pattern (for example, ABU* to space site-abu, RUW* to space site-ruw), or by PI point attributes/metadata mapped to specific spaces. [Filter-based routing idea; Enterprise scaling concern] This capability would avoid the operational overhead of running many parallel extractor instances.
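The routing described above amounts to a first-match rule table mapping tag patterns to spaces. A minimal sketch of that idea, using the ABU*/RUW* examples from the request (the routing-table shape and function name are hypothetical, not an existing extractor config):

```python
import fnmatch

# Hypothetical per-deployment routing table: first matching pattern wins.
ROUTING_RULES = [
    ("ABU*", "site-abu"),
    ("RUW*", "site-ruw"),
]

def target_space(tag_name, rules, default_space="default"):
    """Pick the target space for a time series based on its tag name."""
    for pattern, space in rules:
        if fnmatch.fnmatch(tag_name, pattern):
            return space
    return default_space

print(target_space("ABU-101-PT", ROUTING_RULES))  # site-abu
print(target_space("XYZ-1", ROUTING_RULES))       # default
```

Evaluating such a table once per time series at ingestion time would let one extractor deployment serve all sites, with a default space as a safety net for unmatched tags.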
Celanese is asking for a UI in which they can see the data, views, and their fields within CDF. They currently use the data modeling instances page as the UI to see the data, but as they migrate to S&R, they do not want to lose that functionality altogether (i.e. they mentioned a year is too long to be without it). While it was mentioned that a Streamlit app is available, they would like a counterpart UI for S&R to what we have in data modeling.
In our 360 images, the circles indicating shooting positions are quite large, and when multiple markers are displayed, they tend to obstruct the view. I have several ideas, but the most practical and system-friendly suggestion would be to make the “invisible distance” setting stricter. It would be even better if this parameter could be configured on the system side; currently, markers are visible from a very long distance. Other ideas include enabling size adjustments for the circles or adding an option to toggle their visibility on and off. However, I believe the first suggestion would be the easiest to implement.

Note: Due to internal security reasons, we cannot share 360 plant images online. For illustration, I’m using a black image during loading. Please imagine your own plant with your mind’s eye! 😊