Tell us what features you’d like to see and support other ideas by upvoting them! Share your ideas in a user story format like: As a <role>, I want to <function>, so that <benefit>. This will help us better understand your requirements and increase the chance of others voting for your request.
Hello Cognite Support Team,

I'm working with the Cognite Data Points API and would like to request a feature enhancement regarding aggregation functions.

Current Situation: When querying time series data with a specific granularity (e.g., "1d" for daily), the available aggregates (Sum, Average, Count, Interpolation, StepInterpolation, etc.) don't directly provide the first or last actual data point within each granularity period.

Feature Request: Could you add first and last aggregate functions that would:
- first: Return the earliest data point (by timestamp) within each granularity period.
- last: Return the latest data point (by timestamp) within each granularity period.

Use Case Example: For a time series with granularity: "1d" and aggregates: ["first"], the API would return the first recorded value for each day (e.g., the value at 00:00 or the earliest available timestamp that day). Similarly, aggregates: ["last"] would return the last recorded value for each day (e.g., the value at 23:00 or the latest available timestamp).

Current Workaround: Currently, we fetch hourly data (granularity: "1h") and then manually filter/group to extract the first or last value per day, which is inefficient for large datasets.

Question: Are there any plans to add native first and last aggregate functions? If this feature is already available through a different approach, I'd appreciate guidance on the best practice.
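For reference, a minimal pandas-based sketch of the described workaround, assuming the hourly datapoints have already been retrieved into a timestamp-indexed DataFrame (the API retrieval step itself is omitted):

```python
import pandas as pd

def first_last_per_day(df: pd.DataFrame) -> pd.DataFrame:
    """Given a DataFrame indexed by timestamp with a 'value' column,
    return the first and last recorded value within each day."""
    daily = df.resample("1D")
    return pd.DataFrame({
        "first": daily["value"].first(),  # earliest non-missing value per day
        "last": daily["value"].last(),    # latest non-missing value per day
    })

# Toy example: three readings spread across two days.
idx = pd.to_datetime([
    "2024-01-01 00:15", "2024-01-01 23:40", "2024-01-02 06:00",
])
df = pd.DataFrame({"value": [1.0, 2.0, 3.0]}, index=idx)
print(first_last_per_day(df))
```

A native first/last aggregate would replace this client-side grouping with a single API call per time series.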
We need the ability to filter time series data on the "Last Reading" column in search. The field is not filterable or sortable, nor is it configurable in Search Admin.
Customers using the AI Agent can generate insights in the form of:
- PNGs (charts, visualizations, cause maps)
- Tables (structured results)

However, there is no supported, documented way for the AI Agent to directly push these outputs into Canvas. Users must manually download images or recreate tables, then upload or rebuild them in Canvas. This breaks workflow continuity and reduces the usability of AI Agent outputs in collaborative analysis.

Proposed Enhancement: Introduce a native capability for AI Agents to:
1. Push generated PNGs directly into Canvas as embedded visuals.
2. Push generated tables into Canvas, ideally as:
   - a Canvas DataGrid (when mature), or
   - a structured table block that preserves rows, columns, and formatting.

This could be exposed as:
- an "Add to Canvas" / "Send to Canvas" action from the AI Agent output, or
- a programmatic API available to agents and workflows.
Inspiration
"Context is king in the world of AI." Across research, publications, and industry discussions, one theme consistently stands out: AI without context lacks true intelligence. To unlock the full potential of Industrial AI, we must ground AI solutions in process context.

Vision
Introduce Process-Aware Knowledge Graphs (PAKGs) that integrate process understanding directly into the Cognite Data Fusion (CDF) ecosystem. By capturing and structuring the interconnections, interdependencies, and material flows from Process Flow Diagrams (PFDs), we enable context-driven intelligence for Agentic AI solutions built on Atlas AI and CDF.

Core Capabilities
- System Model Extraction: Automatically extract process metadata from P&IDs and PFDs (PDF/image formats). This removes the dependency on CAD files, which are often unavailable or inconsistent.
- Process-Aware Knowledge Graph Generation: Translate the extracted system model into a Knowledge Graph enriched with process semantics. Represent equipment, process streams, and control loops as nodes and relationships, creating a foundation for process discovery, reasoning, and autonomous insights.

Value Proposition
- Enables Agentic AI systems to reason over process context.
- Accelerates ROI realization from Cognite solutions by improving AI explainability, traceability, and domain relevance.
- Lays the groundwork for next-generation Industrial AI applications, from automated root cause analysis to process optimization.

Ask
I propose enhancing CDF to support this capability natively, creating a bridge between engineering documentation and context-aware AI models.
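To make the node-and-relationship idea concrete, here is a toy sketch of a process graph extracted from a PFD fragment. All names (node kinds, the "feeds" relation, the tag ids) are illustrative assumptions; a real implementation would map onto CDF data modeling rather than an in-memory structure:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    id: str
    kind: str  # e.g. "equipment", "stream", "control_loop" (illustrative)

@dataclass
class ProcessGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (source_id, relation, target_id)

    def add_node(self, node: Node) -> None:
        self.nodes[node.id] = node

    def add_edge(self, source: str, relation: str, target: str) -> None:
        self.edges.append((source, relation, target))

    def downstream_of(self, node_id: str) -> list:
        """Follow 'feeds' edges to find directly downstream nodes."""
        return [t for s, r, t in self.edges if s == node_id and r == "feeds"]

# Hypothetical PFD fragment: pump P-101 feeds heat exchanger E-101.
g = ProcessGraph()
g.add_node(Node("P-101", "equipment"))
g.add_node(Node("E-101", "equipment"))
g.add_edge("P-101", "feeds", "E-101")
```

An agent reasoning over such a graph could, for example, traverse "feeds" edges during root cause analysis to identify which upstream equipment can influence a disturbed measurement.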
Hi Team,
This request is to display a time series' UnitExternalId on the UI/front-end, under the time series information in the data explorer/search. Currently, it is only visible via transformations, the API, or the SDK. For someone who is not adept at using those tools (the target persona is an SME), displaying the UnitExternalId directly on the UI is better suited.
Thanks,
Akash
One of our customers would like to see a preview of 2D plot plans or P&IDs in Search, just like the panel we have for 3D models.
I understand that the Charts monitoring feature includes logic to prevent excessive and unnecessary alerts. Specifically, time series data uploaded at intervals longer than one minute is excluded from monitoring, and alerts are not triggered for such data. The reason is that the system cannot determine whether a lapse in recent values over a certain period is due to legitimate low-frequency data or a system error causing missing data.

On the other hand, we have some time series with very low update frequencies. For example, we have data that is ingested only once every half day (see figure below). With such data, the Charts monitoring feature cannot detect anomalies, and alerts are not triggered. (We have no plans to increase the acquisition frequency due to wireless system specifications.)

Could you consider adding a switch to the Charts monitoring feature that allows users to disable the current behavior of excluding data uploaded at intervals longer than one minute? I believe other industries also have data that is captured only once per day. Allowing users to toggle this behavior as needed would minimize the impact on the Cognite Data Fusion backend. This request comes directly from field engineers, so I would greatly appreciate your consideration.
Would it be possible to have a set of pre-built templates for various disciplines or generic use cases, which a new user can directly utilize for preliminary work until they are trained enough to start building their own canvas? Some users like to use Industrial Canvas as a dashboard, and for them a pre-built template gives an idea of how to use the canvas.
We recently did a backup of all configurations for a solution in both our dev and test projects. For this we used `cdf dump`. The command gave us a lot of options on what to dump, and for each resource we needed to provide an id. However, most of the resources we used were scoped to a CDF group, so we built a tool that takes a group as the basis for fetching all the resource ids and constructing the `cdf dump` commands. My feature request is the ability to dump all resources scoped to a CDF group with the `cdf dump` command, so that you would only specify the name of the CDF group to base the resource dumping on. This would simplify backup and dumping a lot.
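The workaround described above can be sketched as follows. This assumes the resource ids scoped to the group have already been looked up (the lookup itself is not shown), and the exact `cdf dump` subcommand and argument syntax is illustrative, since it varies by Toolkit version:

```python
def build_dump_commands(resources: dict) -> list:
    """Construct `cdf dump` command lines from resource ids fetched via
    a CDF group's scoped capabilities. The command layout shown here
    (`cdf dump <type> <id>`) is a placeholder for the real CLI syntax."""
    commands = []
    for resource_type, ids in resources.items():
        for resource_id in ids:
            commands.append(f"cdf dump {resource_type} {resource_id}")
    return commands

# Hypothetical resources scoped to one group:
scoped = {"datamodel": ["my-model"], "transformation": ["tr-1", "tr-2"]}
for cmd in build_dump_commands(scoped):
    print(cmd)
```

A native `cdf dump` option taking a group name would make this glue code unnecessary.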
Today, the PI extractor can only write time series instances to a single target space per deployment, even when the underlying PI server contains data for multiple sites or governance domains. The only practical workaround is to run many PI extractor instances against the same PI server, each with different tag filters and a different target space, which is hard to scale and operate. [Governance & spaces; Multi-space limitation]

I would like the PI extractor to support multiple target spaces from a single deployment, where the space is selected per time series based on configurable filters. Typical examples would be routing by tag name prefix or pattern (for example, ABU* to space site-abu, RUW* to space site-ruw), or by PI point attributes/metadata mapped to specific spaces. [Filter-based routing idea; Enterprise scaling concern] This capability would avoid the operational overhead of running many parallel extractor instances.
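A minimal sketch of the proposed filter-based routing, using glob-style patterns as in the examples above. The routing table, default space, and function names are all hypothetical; this only illustrates the first-match-wins semantics such a feature could have:

```python
from fnmatch import fnmatch

# Hypothetical routing table: tag-name pattern -> target space.
ROUTES = [
    ("ABU*", "site-abu"),
    ("RUW*", "site-ruw"),
]
DEFAULT_SPACE = "site-default"  # fallback when no pattern matches (assumed)

def target_space(tag_name: str) -> str:
    """Select the target space for a time series: first matching pattern wins."""
    for pattern, space in ROUTES:
        if fnmatch(tag_name, pattern):
            return space
    return DEFAULT_SPACE
```

In the extractor, such a table would most naturally live in the YAML configuration, with each extracted time series routed once at creation time.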
The OPC UA extractor can only write time series instances to a single target space per deployment, even when the underlying OPC UA server contains data from different sources, governance domains, or folders that should be handled individually. The only practical workaround is to run many OPC UA extractor instances against the same OPC UA server/hub, each with different tag filters and a different target space, which is hard to scale and operate. [Governance & spaces; Multi-space limitation]

I would like the OPC UA extractor to support multiple target spaces from a single deployment, where the space is selected per time series based on configurable filters. Typical examples would be routing by tag name prefix or pattern (for example, ABB* to space site-abb, VAL* to space site-val), or by attributes/metadata/folder structure mapped to specific spaces. [Filter-based routing idea; Enterprise scaling concern] This capability would avoid the operational overhead of running many parallel extractor instances. This is strongly related to the need for the same functionality in the PI extractor.
Celanese is asking for a UI where they can see data, views, and their fields within CDF. They currently use the data modeling instances page as the UI to see the data, but as they migrate to S&R they do not want to lose that functionality altogether (i.e., they mentioned a year is too long to be without it). While it was mentioned that a Streamlit app is available, they would like a counterpart UI for S&R to what we have in data modeling.
In our 360 images, the circles indicating shooting positions are quite large, and when multiple markers are displayed, they tend to obstruct the view.I have several ideas, but the most practical and system-friendly suggestion would be to make the “invisible distance” setting stricter.It would be even better if this parameter could be configured on the system side. Currently, I feel like markers are visible from a very long distance.Other ideas include enabling size adjustments for the circles or adding an option to toggle their visibility on and off. However, I believe the first suggestion would be the easiest to implement. Note: Due to internal security reasons, we cannot share 360 plant images online. For illustrations, I’m using a black image during loading. Please imagine your own plant with your mind’s eye! 😊
This is an example document from the PDF preview in Fusion. There is a rotate symbol, but it does not rotate the image; instead, it resets the view to "fit to full page." How do I rotate previewed documents, or do I need to download the file and rotate it in a native app? If rotation is not currently possible, could rotation functionality be added to the document viewer? It would also help to change the icon of the rotate symbol and to remember the rotation of each page in the viewer, since documents and diagrams are often combined in a single PDF file.
We would like the ability to share charts publicly (great feature), but also a feature where I can select the specific people to share a chart with. There should be a "Shared with me" view, similar to the Google Drive concept.
When creating a “Point of Interest” in Scene, after it’s created, you need to click on an area that doesn’t overlap with the point cloud (the dark blue area in the screenshot) to display its details.This means you have to zoom in quite a lot to avoid overlapping with the point cloud, which makes it difficult to check the “Point of Interest” while viewing the entire point cloud.Therefore, it would be helpful if “Point of Interest” could be made easier to click and select, or if Scene could include a mode that allows selecting only “Point of Interest.” This would make the feature much more practical.
When multiple canvases are created in a company, we need more metadata on each Canvas to ease the search for the right/relevant Canvas. It would be interesting to have the option to insert a description of the Canvas, and potentially the main discipline concerned by the Canvas and the main documents mentioned in it. We could then develop AI agents to ease the search for a Canvas.
Hello,
We wanted to check whether we can have transaction capabilities when inserting or deleting data in a data model in Cognite using the SDK. We currently have a requirement to insert into or delete from multiple views as part of a single operation in a UI application: either the operation completes and the data is affected in all views, or the operation fails and no data is partially affected. In other words, the whole transaction is either committed or rolled back. If this is already available, could you share the documentation link? If not, can we take this as a requirement? Please also share an ETA.
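Absent native transactions, the usual client-side pattern is a compensating rollback: apply the writes view by view and undo the ones that succeeded if a later one fails. A minimal sketch, where `client.upsert`/`client.delete` are stand-ins for the real SDK calls (not actual Cognite SDK method names), and which is only best effort, not truly atomic:

```python
def apply_all_or_nothing(client, operations):
    """Apply a list of (view_id, instances) writes; if any write fails,
    delete what was already written so no view is left partially updated.

    `client` is assumed to expose upsert(view_id, instances) and
    delete(view_id, instances) -- placeholders for the real SDK calls.
    Best effort only: the rollback itself can also fail.
    """
    applied = []
    try:
        for view_id, instances in operations:
            client.upsert(view_id, instances)
            applied.append((view_id, instances))
    except Exception:
        # Compensate in reverse order, then surface the original error.
        for view_id, instances in reversed(applied):
            client.delete(view_id, instances)
        raise

# In-memory fake client for demonstration.
class FakeClient:
    def __init__(self, fail_on=None):
        self.state = {}
        self.fail_on = fail_on
    def upsert(self, view_id, instances):
        if view_id == self.fail_on:
            raise RuntimeError("simulated write failure")
        self.state.setdefault(view_id, []).extend(instances)
    def delete(self, view_id, instances):
        for i in instances:
            self.state[view_id].remove(i)
```

This is exactly the kind of fragile client-side logic that server-side transactional writes would eliminate.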
The stamp feature in Canvas is difficult to use for users in Japan. The main reason is that the symbols we commonly use are not available, and the available shapes are unfamiliar. This stems from the fact that Japan uses a standard called JIS, which defines the standard symbols used in piping and instrumentation diagrams. (In the U.S., this would be like ASME and ANSI.) "Add JIS-defined symbols" could be a solution, but many other countries also have their own standards, and supporting each of them individually would be impractical. Additionally, industries outside of oil & gas may require different sets of symbols. Therefore, I would like to suggest a feature that allows users to upload their own custom stamps. More specifically, we could upload an image of a specified size to a dataset and use it as a stamp. The reason for specifying a size is that very large images would not be appropriate as stamps; something around 40×40 px seems reasonable. It would be great if you could consider this.
When creating new time series via calculations, the user has to manually input the unit of the resulting time series. This can lead to human error and is also a repetitive task.

Aim:
- Depending on the units of the sources and the calculation function, infer the unit of the resulting time series.
- Potentially ask the user for confirmation.
- Display the time series units within the calculation nodes.

Value:
- Expedite calculation creation.
- Enhance the user experience through automation.
- Limit data entry errors.
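The inference rules could look something like the following toy sketch. This is not how Charts works today; the operator symbols and unit strings are assumptions, and returning `None` marks the "ask the user for confirmation" case:

```python
def infer_unit(op: str, unit_a: str, unit_b: str):
    """Infer the result unit of a binary calculation from its input units.
    Returns None when inference is not possible, signalling that the
    user should confirm the unit manually. Only a few basic rules."""
    if op in ("+", "-"):
        # Addition/subtraction only makes sense for matching units.
        return unit_a if unit_a == unit_b else None
    if op == "*":
        return f"{unit_a}*{unit_b}"
    if op == "/":
        # Identical units cancel to a dimensionless result.
        return "" if unit_a == unit_b else f"{unit_a}/{unit_b}"
    return None
```

For example, dividing a flow in m3 by a duration in h would be inferred as "m3/h", while adding degC to bar would fall back to asking the user.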
Subscribing to or unsubscribing from alerts one by one can be cumbersome, even when they belong to the same group. It would be helpful to have a feature that allows bulk subscription of alerts within charts.
Atlas AI – agent building and chatbot notes

The Good
- Able to switch between list and tile view, as well as Search.
- Having a Description is helpful.
- Nice that there are sample questions.
- Loading feedback.
- Ability to stop generation.
- Legal disclaimer near the prompt area.
- Nice that it shows reasoning steps.
- Like the suggestions with Show More.
- Bot background color is fine.
- Reasonable area to write a prompt.
- Great to be able to view, via the info icon, the details of what each agent is good at before switching to it.

Could be improved

Priority 1
- Starting a new chat is "risky" because there is no history, copy, or chat download/export functionality. Part of the point is to aggregate information for later analysis, sharing, or collaboration.
- Chat history / multiple threads.
- Need a way to set expectations up front so that when a new chat is started, we can orient users as to what can and cannot be done. The Description helps but gets cut off.
- Sample questions as defaults are not injected but auto-triggered. This can be an issue, since the prompt is most likely not exactly what the user wants.
- Need some sort of prompt library so that users are presented with prompts that have been vetted and that we are sure will produce relevant answers, getting them started quickly. So not just examples, but starter prompts.
- When it suggests follow-ups, it would be great if these could be clicked and injected into the prompt area (but not auto-submitted).

Priority 2
- I would expect the naming for Atlas Agents to be in Industrial Tools, and Agent Library to be in Atlas AI.
- Ability to choose to share threads.
- Edit icon should maybe use a plus. I know Copilot uses that, but it's still odd.
- Not sure what to do about the location action and listing. What does this mean, and how does it affect the bot? Needs a link to terms or docs.
- No file attachment. Users will probably want/need this to override on a case-by-case basis.
- Publish/Unpublish flow is a bit awkward when prompt engineering between multiple people.
- Prompt area should be fixed to a certain height (perhaps half the viewport) and scroll, or be resizable.
- The use of a full takeover modal with a close button is odd when starting a chat.
- Switching to a new chat when moving between agents without a chat history is somewhat dangerous, since users will lose all their work.
- It's good that we can switch between agents while in a thread. However, I would expect that when I am in the agent library, clicking on another agent starts a new thread with that agent, and returning to the original agent shows my previous conversation with it.
- Cannot generate tables easily, although I did manage to do so one time.
- I easily miss the My Agents and Published tabs. This causes a starter problem.
- Would like to add multiple tools at a time, not one by one.

Possible bugs
- Seemed to lose past chat bubbles after a certain amount of conversation. Reloading shows them. Need to verify.
- Ran into an issue where switching between agents within a thread created a new chat.