Tell us what features you’d like to see and support other ideas by upvoting them! Share your ideas in a user story format like: As a <role>, I want to <function>, so that <benefit>. This will help us better understand your requirements and increase the chance of others voting for your request.
Example document from the PDF preview in Fusion. There is a rotate symbol, but it does not rotate the image; instead it resets the view to “fit to full page”. How do I rotate previewed documents, or do I need to download them and rotate them in a native app? If rotation is not currently possible, could rotation functionality be added to the document viewer? It might also help to change the icon of the rotate symbol, and to remember the rotation of each page in the viewer as well, since documents and diagrams are often combined in a single PDF file.
Can a simple label be added to InField so that a user knows which environment they are in? We have 3 different environments, and the only way to know for certain which one we are in is to log out and log back in. We have had users create templates in our UAT environment before they realized it; they then had to recreate them manually in production. A simple label at the top would suffice. This would save us countless rounds of logging in and out and also make sure users know when they are not in the correct environment. @Andrew Montgomery
Now, when selecting the stacking function in Charts, all the selected time series are combined on the same Y-axis (see attached picture). The SMEs report that this UI is very messy and hard to read properly. A suggested improvement would be a feature where the user can choose to stack the Y-axes separately (see picture). This would improve the readability of the data in the chart.
The File category view is very useful, allowing you to quickly filter on the right files. But some file categories are more important than others, especially the P&IDs. It would be good to be able to configure the order of the file categories in that list, allowing us to e.g. “pin” the P&ID category at the top.
There's an unnecessary amount of filtering options under Search. I guess it's okay to show the options available, but for several tags there are multiple filters without any hits; why are these shown? The number of filters is already high, and some of their descriptions are not very accurate (but I guess they work for some people...). This is mainly about removing some noise from the filtering functions. An explanatory information document for the filter options would also be nice; many people who try to filter will really struggle to know what to filter on. A suggestion: hide properties from the filter if none of their values are populated.
When selecting an instance in Search, the Overview tab is very useful (seeing more than one data category without having to switch view). The challenge today is that the tiles have pre-defined columns, and they do not allow for filtering. It is of course possible to go into the “full screen mode” for each category, like Activities, but then you lose the possibility of seeing data from more categories in context. It would be very useful to be able to configure the properties we see in each tile and to filter on properties, like we can do in the “full screen mode”.
With the data modelling framework, most models will end up with a significant number of Views. Today we are not able to control the order the Views are presented in. By reverse engineering, the ordering appears to work as follows:

- The order of the Views is based on the order of the (first mentioned) CDM extension in the model definition.
- All non-CDM Views are listed at the end, irrespective of whether they appear before the CDM Views in the definition or not.
- What the UI considers a CDM extension is not based on the implements clause, but on whether the View references at least one property from one of the CDM containers.
- Inside each category (e.g. a specific CDM extension, or the non-CDM Views), the order in the model definition dictates the order.
- If I choose alphabetic sorting, it seems to only affect the non-CDM Views, not the order inside each CDM category. It does, however, put all the CDM categories first or last.

It is very inconvenient to use the order of appearance in the model definition as the way of sorting (we have to sort both the CDM extensions and the Views inside each category), since any change in the order requires pushing a new model definition to CDF.

We need the ability to define the order the Views appear in the UI, independent of how they are listed in the model definition, and we need to be able to create groupings that do not follow the CDM extensions. E.g., we have maintenance information that is not a CogniteActivity extension but that we still want to group next to the other CogniteActivity-extended maintenance information. The CDM-based grouping is relevant from a model developer point of view, and not that much from an end user point of view.

We also need the ability to define the default View. Right now it seems to be the first View in the model definition that is a CogniteAsset extension. When changing the location filter, it always resets to the (non-configurable) default, which we see create a lot of initial confusion among users.
Since the CDM Views are not explicitly described in our model, we do not see the CDM based grouping.
When selecting an instance in Search, the Overview tab is very useful (seeing more than one data category without having to switch view). However, the tiles are static and cannot be resized. Resizing would allow the user to see more of the data in one category while still seeing the other data categories (e.g. I want to see more of the properties from the Asset while still seeing the list of work orders). For the same reason we would like to be able to change the order of the tiles, e.g. moving the work order tile up next to the properties tile, to better focus on the things we want to see. The same behavior is also relevant when searching for data via Charts and Canvas.
When selecting the property that will be used for the x-axis in the graph, the list is not searchable. For more complex Views the list of properties can be long (100+), which makes a non-searchable, non-sorted list relatively hard to work with.
The closest to that solution for now is adding a time series on the canvas; you then get an option to “Open in Charts” and can choose a new or existing chart. This is too cumbersome. See the picture below of the suggested solution directly in Canvas:
If a document name is “clickable”, it can connect to itself. This makes the canvas unorganised, especially if you want connection lines to multiple other documents. A blocker should be in place to ensure that connection lines do not connect a document to itself.
I am submitting this idea on behalf of LyondellBasell, and it consists of the ability to search for parts of a name, whether that is a filename, equipment name, asset, etc. One scenario is that an end user is interested in a particular subsystem (i.e. a flow loop), so the user would search for the number of that subsystem, expecting all elements of that subsystem to be returned. For example, searching for “28806” would return: TI-28806, TI28806, TE-28806, PI28806, HY28806, HY-28806.
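The matching behavior requested could be sketched roughly as follows. This is a minimal illustration only, not Cognite's actual search implementation; the `normalize` and `matches` helpers are hypothetical names introduced here:

```python
import re

def normalize(tag: str) -> str:
    """Strip common separators so 'TI-28806' and 'TI28806' compare equal."""
    return re.sub(r"[-_ ]", "", tag).upper()

def matches(query: str, tag: str) -> bool:
    """True if the normalized query appears inside the normalized tag."""
    return normalize(query) in normalize(tag)

tags = ["TI-28806", "TI28806", "TE-28806", "PI28806", "HY28806", "HY-28806", "TI-99999"]
hits = [t for t in tags if matches("28806", t)]
# hits contains every tag above except "TI-99999"
```

The key idea is that both the query and the candidate names are normalized before comparison, so separator variations (hyphens, underscores, spaces) no longer matter.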
When you click on a comment in the comment panel, you should be moved to wherever on the canvas that comment is located. Imagine working in a huge canvas with your team, with multiple comments, and having to scan through everything just to find that exact comment box. The canvas should both jump to the comment automatically on click, and offer it as a choice in the three-dots menu (“Go to comment”).
There is no option to hide connecting lines between documents in Canvas. This can lead to an overcrowded canvas layout where it is difficult to see which lines are connected where. I recommend adding an option to hide individual lines to ensure a better visual layout.
It should be possible to enlarge windows in Canvas in the vertical direction.
The OPCUA extractor can only write time series instances to a single target space per deployment, even when the underlying OPCUA server contains data from different sources, governance domains, or folders that should be handled individually. The only practical workaround is to run many OPCUA extractor instances against the same OPCUA server/hub, each with different tag filters and a different target space, which is hard to scale and operate. [Governance & spaces; multi-space limitation]

I would like the OPCUA extractor to support multiple target spaces from a single deployment, where the space is selected per time series based on configurable filters. Typical examples would be routing by tag name prefix or pattern (for example, ABB* to space site-abb, VAL* to space site-val), or by attributes/metadata/folder structure mapped to specific spaces. [Filter-based routing idea; enterprise scaling concern]

This capability would avoid the operational overhead of many parallel extractor instances. This is strongly related to the need for the same functionality for PI Cognite Hub.
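The filter-based routing idea above could be sketched like this. This is an illustration only, not existing extractor configuration; the `ROUTES` table, the space names, and the `target_space` helper are all hypothetical:

```python
import fnmatch

# Hypothetical routing table: tag-name pattern -> target CDF space.
# First matching pattern wins; unmatched tags go to a default space.
ROUTES = [
    ("ABB*", "site-abb"),
    ("VAL*", "site-val"),
]
DEFAULT_SPACE = "site-default"

def target_space(tag_name: str) -> str:
    """Return the target space for a tag based on glob-style patterns."""
    for pattern, space in ROUTES:
        if fnmatch.fnmatchcase(tag_name, pattern):
            return space
    return DEFAULT_SPACE
```

An extractor supporting this would evaluate such a table once per discovered time series, so a single deployment could fan out to many spaces without parallel instances.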
Recently a change was implemented to make the connection lines between documents on a Canvas orthogonal, and to combine and flow between documents. There are some canvases where I would like to see a direct straight-line connection between my references to quickly locate interconnectivity. My product idea: add a toggle to choose between orthogonal connectors and straight direct connectors.
Currently, the PI-AF Extractor works as a forward synchronization tool that reads the PI-AF structure and sends it to Cognite Data Fusion (CDF). The extractor correctly handles:

- Insertions: new elements created in PI-AF are detected and created in CDF.
- Modifications: updates to elements or attributes (such as renaming elements or moving branches) are synchronized and reflected in CDF.

However, when elements or branches are deleted in PI-AF, the corresponding objects remain in CDF, and there is no indicator that the element no longer exists in the source system. This behavior is currently intentional, to prevent accidental or irreversible data loss in CDF. While this approach protects data in CDF, it creates challenges for downstream processes. Without any indication that an object was deleted in the source system, it becomes difficult to automate workflows that depend on identifying obsolete or removed elements.

Requested capability: a mechanism to identify when an element has been deleted from PI-AF, without necessarily deleting the corresponding object in CDF. For example, the extractor could mark such objects with a metadata field, tag, or status indicating that the element was removed from the source system. This would allow users to implement their own logic for handling these cases (e.g., archiving, cleaning up, or removing the objects in downstream processes) while maintaining the current protection against unintended data deletion.

Use case: organizations that synchronize large PI-AF hierarchies to CDF often rely on automated pipelines and asset-based workflows. When assets or branches are removed from PI-AF, there is currently no straightforward way to detect this change in CDF, making it difficult to automate lifecycle management of these elements. Providing an indicator that an element was deleted in the source system would enable users to build controlled automation around these events.

Regards, Daniel
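The requested marking behavior amounts to a set difference between the source hierarchy and the synced objects. A minimal sketch, with hypothetical names throughout (`mark_deleted`, the `deletedInSource` metadata key, and the in-memory dict stand in for real extractor/CDF structures):

```python
from datetime import datetime, timezone

def mark_deleted(source_ids: set, cdf_objects: dict) -> dict:
    """Flag CDF objects whose external id no longer exists in PI-AF.

    cdf_objects maps external id -> metadata dict. Objects missing from
    the source get a marker instead of being deleted; objects that
    reappear in the source have the marker cleared.
    """
    for xid, meta in cdf_objects.items():
        if xid not in source_ids:
            meta["deletedInSource"] = "true"
            meta["deletedDetectedAt"] = datetime.now(timezone.utc).isoformat()
        else:
            meta.pop("deletedInSource", None)
            meta.pop("deletedDetectedAt", None)
    return cdf_objects
```

Downstream pipelines could then filter on the marker to archive or clean up obsolete objects on their own schedule, preserving the current protection against unintended deletion.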
Hi, when retrieving datapoints as a pandas DataFrame through the Python SDK using the retrieve_dataframe() method, the external_id of the time series is set as the resulting column header. The external_id is not very readable, and it would be nice if we could specify which attribute to use as the column header. If there was a parameter in the function called e.g. use_timeseries_name [boolean], we could set the name of the time series as the column header rather than the external ID, which is more readable.
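Until such a parameter exists, one workaround is a plain pandas rename after retrieval. This assumes the DataFrame and an external-id-to-name mapping have already been fetched via the SDK; the sample data, column ids, and names below are made up for illustration:

```python
import pandas as pd

# Suppose retrieve_dataframe() returned columns keyed by external_id,
# and the matching TimeSeries objects were fetched to build a name map.
df = pd.DataFrame(
    {"pi:ts_4711": [1.0, 2.0], "pi:ts_4712": [3.0, 4.0]},
    index=pd.date_range("2024-01-01", periods=2, freq="h"),
)
name_by_xid = {"pi:ts_4711": "Reactor temperature", "pi:ts_4712": "Reactor pressure"}

# Fall back to the external id when no name is available.
df = df.rename(columns=lambda xid: name_by_xid.get(xid, xid))
```

One caveat with this approach: time series names are not guaranteed unique the way external ids are, so duplicate column headers are possible after renaming.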
Hello Cognite Support Team, I'm working with the Cognite Data Points API and would like to request a feature enhancement regarding aggregation functions.

Current Situation: When querying time series data with a specific granularity (e.g., "1d" for daily), the available aggregates (Sum, Average, Count, Interpolation, StepInterpolation, etc.) don't directly provide the first or last actual data point within each granularity period.

Feature Request: Could you add first and last aggregate functions that would:
- first: return the earliest data point (by timestamp) within each granularity period
- last: return the latest data point (by timestamp) within each granularity period

Use Case Example: For a time series with granularity: "1d" and aggregates: ["first"], the API would return the first recorded value for each day (e.g., the value at 00:00 or the earliest available timestamp that day). Similarly, aggregates: ["last"] would return the last recorded value for each day (e.g., the value at 23:00 or the latest available timestamp).

Current Workaround: Currently, we're fetching hourly data (granularity: "1h") and then manually filtering/grouping to extract the first or last value per day, which is less efficient for large datasets.

Question: Are there any plans to add native first and last aggregate functions? If this feature is already available through a different approach, I'd appreciate guidance on the best practice.
Sometimes workflows fail and I really need to know as soon as possible when they do. I think being able to send a well structured email with links to the workflow, the task that failed, and the error from the task would be the perfect addition to workflows. Is this feature in the roadmap and what can I do to influence the decision to prioritize it?