Tell us what features you’d like to see and support other ideas by upvoting them! Share your ideas in a user story format like: As a <role>, I want to <function>, so that <benefit>. This will help us better understand your requirements and increase the chance of others voting for your request.
Hi, when I want to apply different rules to multiple objects, I don't know of an efficient way to do it. Instead, I first create the rules once on ONE object, then duplicate them and assign them to each object (erasing the unrelated ones). This means I end up deleting a lot of rules, but the “Do you really want to delete” button is almost hidden and hard to press. I'd be happy to have this checked and the position fixed. Regards,
This request is coming from National Grid.

Background
The current behavior in Industrial Canvas is that only one person can edit at a time. This is very limiting when a team wants to do collaborative troubleshooting, root cause analysis, or QC of documents (e.g. over a screen-sharing session).

Requirement
Add the capability to support concurrent modifications to Industrial Canvas so that multiple users can work on the same canvas at the same time. Everyone who has write access to the canvas should be able to edit simultaneously, unless the canvas is explicitly locked for editing by someone.
Hi, I like sticky notes because they can be added easily, are auto-colored, and the letter size adjusts automatically as the note size changes. I think they are superior to transparent text objects in most cases. However, the shape is only square. I want to be able to change them to rectangles, because we tend to add text that is several lines long. This might be a trade-off against the automatic letter-size change, but a changeable frame shape would be more useful, I guess. I hope some of you will consider and share this.
Default: the upper-right among all the attachments.
My idea: close to where the mouse is.
When adding time series data to industrial canvas, providing the name of the tag and the description field in the time series tile window would clarify what tag is being shown. Submitted at the request of the Cognite project team.
When adding time series data to industrial canvas, providing the name of the tag and the description field in the time series preview window would make selecting the correct reading more effective. Currently, there is often not enough info in the time series tag itself to know which data to select. Submitted at the request of the Cognite project team.
With the recent update, it is now possible to add tables to Canvas. Currently, I believe you can only input text into the tables, but is it possible to embed asset information, timelines, and files into them? If this becomes possible, it would allow for a more organized dashboard in Canvas, greatly expanding its usability.
A customer from Oxy reported this.

Issue described
When searching for documentation in Maintain, documents only show up when searching by a unique system-generated ID, rather than by the document name, drawing number, etc. The user wants search to work across document metadata, which they find more meaningful and easier to use. This behavior is reproducible and seems to align with how search currently functions in Maintain. However, the customer would like the search experience in Maintain to match that of CDF.
A customer from Oxy has highlighted the importance of search functionality when dealing with special characters in Industrial Canvas. Please see the use case below for further clarification: an asset named '4011-1 1/2'-G4J-014' exists in CDF, but when searching with the term '1 1/2'-G4J-014', the asset does not appear in the results. However, using 'G4J-014' as the search term correctly displays the asset as the top result.
A customer from Oxy has reported that the UI in 'Browse 3D' should either open the selected file in a new tab or, upon clicking 'search results' in the top-left corner, navigate directly to the selected 3D preview without losing it, eliminating the need to restart the search.
The customer (BBraun - @gary.goh@bbraun.com) has requested the enablement of the feature that allows Work Orders/Activities from Cognite Maintain to be added to CDF Industrial Canvas. Simply, for any/every annotated Asset on the CDF Canvas the customer would like to bring in the associated Work Order/Activity from Maintain.
On behalf of Celanese InField super users: the users have expressed interest in being able to see both observations and any comments/notes that have been linked to an asset on the asset overview page. This would make the data easily accessible rather than needing to open specific checklists. @Kristoffer Knudsen
At the moment, CDF Canvas lines for relations (from a P&ID to other documents, assets, charts, or time series) are only displayed in purple. This is an issue when working on large P&IDs and doing RCAs: all lines are purple and become mixed and hard to track, adding complexity to something that is meant to simplify work. Clients are asking for tools to simplify the display of relation lines, such as:
- The ability to change each line's color, thickness, or type, or having these change automatically for every new relationship added from the P&ID.
- A legend tab where one could select what each color represents among the relationship lines, with the option to toggle visibility if one wishes to hide only some relationship lines related to certain assets or time series.
I am not sure if a similar idea has been published on the hub, but if not, kindly consider it.
Hi. I’d like to see more aggregation functions in the Grafana data source. Currently only average, interpolation, and step_interpolation are allowed. Any chance of implementing support for sum, count, etc.?

Error: count is not one of the valid aggregates: AVERAGE, INTERPOLATION, STEP_INTERPOLATION

https://docs.cognite.com/cdf/dashboards/guides/grafana/timeseries
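As a stopgap while the connector lacks these aggregates, sum and count can be computed client-side from raw datapoints. A minimal sketch in plain Python with illustrative data (bucketing logic only, not the connector's actual code):

```python
# Workaround sketch: bucket raw datapoints by granularity and
# aggregate client-side, since the Grafana connector only exposes
# AVERAGE, INTERPOLATION, and STEP_INTERPOLATION.
def aggregate(datapoints, granularity_ms, agg="sum"):
    """datapoints: list of (timestamp_ms, value) tuples."""
    buckets = {}
    for ts, value in datapoints:
        key = ts - ts % granularity_ms  # start of the bucket
        buckets.setdefault(key, []).append(value)
    if agg == "sum":
        return {k: sum(v) for k, v in sorted(buckets.items())}
    if agg == "count":
        return {k: len(v) for k, v in sorted(buckets.items())}
    raise ValueError(f"unsupported aggregate: {agg}")

points = [(0, 1.0), (500, 2.0), (1500, 3.0)]
print(aggregate(points, 1000, "sum"))    # {0: 3.0, 1000: 3.0}
print(aggregate(points, 1000, "count"))  # {0: 2, 1000: 1}
```

This is only practical for modest point counts; the request stands for server-side support.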
Problem Statement:
Synthetic time series in CDF are dynamically calculated based on expressions, but lack persistent identifiers. This limits usability when users need:
- Persistent IDs for discovery and access.
- Simplified queries via the API or SDK.
- Centralized definition of expressions as part of the data model, avoiding the need for each backend to redefine them.

Suggested Approach:
Introduce a Synthetic Time Series Definition object that:
- Allows defining synthetic series with a persistent external_id.
- Stores metadata like expressions, descriptions, and units.
- Enables dynamic evaluation without requiring data storage.
- Supports defining expressions as part of a model, enabling reuse across different systems without requiring redundant definitions in backends.

Benefits:
- Usability: persistent identifiers for easier access and queries.
- Consistency: eliminates repetitive expression definitions.
- Scalability: centralized expression definitions simplify updates and maintenance.
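To make the proposal concrete, here is a minimal sketch of what such a definition object could look like. All field names here are assumptions for illustration, not an existing CDF API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the proposed Synthetic Time Series Definition
# object; field names and the expression syntax shown are assumptions.
@dataclass
class SyntheticTimeSeriesDefinition:
    external_id: str            # persistent identifier for discovery/access
    expression: str             # evaluated on demand, no data stored
    description: str = ""
    unit: str = ""
    metadata: dict = field(default_factory=dict)

defn = SyntheticTimeSeriesDefinition(
    external_id="pump_total_flow",
    expression="TS{externalId='flow_a'} + TS{externalId='flow_b'}",
    description="Combined flow of pumps A and B",
    unit="m3/h",
)
print(defn.external_id)  # pump_total_flow
```

Because the definition carries a persistent external_id, any backend could look it up and evaluate the expression instead of redefining it locally.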
This is closely related to an existing suggestion (Cognite Hub); details specific to our application were already discussed with @Jason Dressel. In our data model, we already make use of synthetic expressions and have created a wrapper view for time series that can refer either to a regular time series instance or to a synthetic expression. In adopting CDM, we cannot really represent the same thing using CogniteTimeseries, even if we extend the model, as this would lead to us creating an empty time series for every synthetic time series that we want to create. The suggestion is to extend the existing CogniteTimeseries in CDM with some support for storing and accessing synthetic time series expressions.
This issue was reported by a user at SLB who encountered unexpected behavior when sorting the runHistory field in GraphQL: the sort does not appear to be applied when using first: 1 in the query.

type Workflow {
  runHistory: [WorkflowRun]
}

type WorkflowRun {
  endTime: Timestamp
}

query MyQuery {
  listWorkflow {
    items {
      runHistory(sort: {endTime: DESC}, first: 1) {
        items {
          endTime
        }
      }
    }
  }
}

By running the above query, the user expects the first WorkflowRun in runHistory to be the most recent. If first: 1 is changed to first: 10, the WorkflowRun items do appear in descending order.
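The observed behavior is consistent with the limit being applied before the sort rather than after it. A small Python sketch of the two possible semantics (this is an assumption about the cause, not a confirmed diagnosis):

```python
# Two possible semantics for `sort` combined with `first: 1`.
# The expected behavior is sort-then-limit; the reported symptom
# looks like limit-then-sort.
runs = [{"endTime": 100}, {"endTime": 300}, {"endTime": 200}]

def sort_then_limit(items, n):
    """Expected: sort all items, then take the first n."""
    return sorted(items, key=lambda r: r["endTime"], reverse=True)[:n]

def limit_then_sort(items, n):
    """Suspected bug: truncate first, then sort the remainder."""
    return sorted(items[:n], key=lambda r: r["endTime"], reverse=True)

print(sort_then_limit(runs, 1))   # [{'endTime': 300}] -- most recent, as expected
print(limit_then_sort(runs, 1))   # [{'endTime': 100}] -- matches the reported symptom
```

With n large enough to cover the whole list (e.g. first: 10 here), both orderings coincide, which would explain why the user sees correct descending order in that case.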
We would very much like a new column, “extractor type”, to be added to this view. Currently there is no indicator of what type an extractor is. For the developers working on the extractor this is not an issue, because they already know, but for others who just want an overview it is more difficult, as we would need to click into every single one to find out. Sometimes we would like to see just all REST extractors, or all MQTT extractors, and so on.

Another point is the throughput: “0 datapoints p/hr”. What does the “p” stand for? If it means “0 datapoints per hour”, the “p” is redundant, as the standard use of “/” already signifies it. Also, the SI symbol for hour is h, not hr, e.g. km/h (km/t in Norwegian), m/s, etc.
Hello, I have been trying to use the ODBC extractor to pull data from Snowflake. I could do that by following the instructions here: https://docs.cognite.com/cdf/integration/guides/extraction/configuration/db/#-snowflake However, this requires me to put passwords in the YAML config. Is there any way to use a DSN from ODBC instead? I see that the connection-string parameter is not supported when I choose snowflake as the type. Thanks!
Problem Description: the Cognite File Extractor for SharePoint is not able to recursively retrieve documents from a SharePoint site that has sub-sites, e.g. a Document Control System.

Alternatives explored:
1. Provide the SharePoint site URL as described in the documentation (SharePoint Online | Cognite Documentation).
   Result: no errors; however, no documents are loaded.
   Evaluation: recursion does not descend from the site level to the document libraries and folders where SharePoint stores documents.
2. Provide each document library and folder/sub-folder explicitly.
   Result: working; however, sustainability is not ideal.
   Evaluation: would require potentially >100 URLs to be configured and kept up to date as document libraries or folders are created or deleted.

Recommended enhancement: recursion from the root node (the SharePoint site). E.g., if a site URL is provided, recursion should go through each sub-site (if present) down to document libraries and sub-folders to retrieve documents. This should be treated as high priority. The second alternative above can be seen as a workaround, but keeping SharePoint URLs in the config up to date would require coordination/sync with the source – likely through custom
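The requested recursion amounts to a simple tree walk. The sketch below uses an in-memory stand-in for a SharePoint site tree; a real traversal would go through the SharePoint API, and all structure and names here are illustrative assumptions:

```python
# Sketch of the requested recursion: starting from a site node, visit
# every sub-site, document library, and folder, collecting documents
# at any depth. The dict structure is a stand-in for SharePoint.
def collect_documents(node):
    """Recursively gather documents from a site and everything below it."""
    docs = list(node.get("documents", []))
    children = (node.get("subsites", [])
                + node.get("libraries", [])
                + node.get("folders", []))
    for child in children:
        docs.extend(collect_documents(child))
    return docs

site = {
    "name": "Document Control System",
    "subsites": [
        {"name": "Sub-site A",
         "libraries": [{"name": "Lib 1",
                        "documents": ["spec.pdf"],
                        "folders": [{"documents": ["drawing.dwg"]}]}]},
    ],
    "libraries": [{"documents": ["index.docx"]}],
}
print(collect_documents(site))  # all three documents, regardless of depth
```

With recursion like this, configuring the single root site URL would be enough; new libraries or folders would be picked up automatically instead of requiring config updates.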
Problem: if one or more of the source time series are missing datapoints, those datapoints will also be missing in the synthetic time series output. Would it be possible to add an option to supply a default “not available” value for missing datapoints, which the API could then use to calculate a partial result?

Source:
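A sketch of the requested behavior, computed client-side over simple {timestamp: value} dicts; the default parameter is the hypothetical option being requested, not part of the current synthetic time series API:

```python
# Sketch: point-wise sum of two series. Without a default, points
# missing from either input are dropped (current API behavior);
# with a default, a partial result can still be computed.
def add_series(a, b, default=None):
    """a, b: {timestamp: value} dicts. Returns their point-wise sum."""
    out = {}
    for ts in sorted(set(a) | set(b)):
        va, vb = a.get(ts, default), b.get(ts, default)
        if va is None or vb is None:
            continue  # no default supplied -> skip the point entirely
        out[ts] = va + vb
    return out

a = {0: 1.0, 1: 2.0, 2: 3.0}
b = {0: 10.0, 2: 30.0}                 # missing a point at ts=1
print(add_series(a, b))                # {0: 11.0, 2: 33.0} -- ts=1 dropped
print(add_series(a, b, default=0.0))   # {0: 11.0, 1: 2.0, 2: 33.0}
```

The second call shows the desired outcome: the gap in one input no longer erases the point from the output.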
I understand that updates older than 7 days may be discarded from the endpoint, which is perfectly fine. However, I’m wondering if it’s possible to determine when a cursor is older than the given limit, since updates older than 7 days are discarded. Based on my understanding of your Subscription API, it would simply continue iterating over the oldest available updates on the endpoint, which could result in missing important updates that occurred between the time of my cursor and the current oldest update. For the Subscription API to be fully functional, I believe it's essential to either receive a notification or an exception when using an expired cursor, be able to determine the age of a cursor, or somehow be assured that iterating from a cursor won't lead to lost data. Knowing the age of the cursor would be preferable, as it would allow us to gauge how much margin we have, but it should also be possible to receive a notification if an expired cursor is being used. Is this currently possible, or are there any plans to implement such a feature?
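Until such a feature exists, the only option seems to be client-side bookkeeping: record when each cursor was issued and refuse to resume from one older than the retention window. A sketch assuming a 7-day window (note this only approximates true expiry, which is exactly why an API-side signal would be preferable):

```python
import time

# Client-side workaround sketch: track when each subscription cursor
# was issued, and treat any cursor older than the retention window
# (assumed 7 days) or any unknown cursor as unsafe to resume from.
RETENTION_SECONDS = 7 * 24 * 3600

class CursorTracker:
    def __init__(self):
        self.issued_at = {}  # cursor -> unix timestamp when received

    def record(self, cursor, now=None):
        self.issued_at[cursor] = time.time() if now is None else now

    def is_expired(self, cursor, now=None):
        now = time.time() if now is None else now
        issued = self.issued_at.get(cursor)
        if issued is None:
            return True  # unknown cursor: assume it may be expired
        return now - issued > RETENTION_SECONDS

tracker = CursorTracker()
tracker.record("cursor-1", now=0)
print(tracker.is_expired("cursor-1", now=3600))           # False (1 hour old)
print(tracker.is_expired("cursor-1", now=8 * 24 * 3600))  # True  (8 days old)
```

This gives a rough safety margin, but it cannot detect server-side discards precisely, so an explicit expired-cursor notification or a cursor-age query from the API would still be needed for full correctness.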
Once in a while we make changes to our telemetry setup in such a way that certain time series get deprecated. This happens when we change control systems, or make changes in our telemetry pipeline, or even discontinue operations at a location. We want to have the “deprecated” time series available in CDF, but it would be useful for us if these would not by default appear in search, or in the asset hierarchy, or through Grafana and other solutions. It would be useful if there was a way to mark a time series as deprecated or as only containing historical data, so that in order to have the time series returned in search or list operations, the user would have to make an active choice to include those time series. My suggestion is an “isDeprecated” property on each time series. There are other solutions to the problem, but these all seem to make it more difficult for the user than simply not showing deprecated time series by default:
- Metadata field: isDeprecated (boolean)
- Move deprecated time series to a different data set
- Separate data models for “active” and “non-active” time series
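The proposed default filtering can be sketched with an in-memory stand-in for a time series listing. The include_deprecated flag is the hypothetical opt-in being suggested, and the string-valued metadata flag mirrors the metadata-field alternative mentioned above:

```python
# Sketch of the suggested behavior: hide series flagged as deprecated
# unless the caller opts in. `all_series` stands in for a CDF time
# series listing; `include_deprecated` is the hypothetical opt-in.
def list_time_series(all_series, include_deprecated=False):
    return [
        ts for ts in all_series
        if include_deprecated
        or ts.get("metadata", {}).get("isDeprecated") != "true"
    ]

series = [
    {"externalId": "temp_new"},
    {"externalId": "temp_old", "metadata": {"isDeprecated": "true"}},
]
print([ts["externalId"] for ts in list_time_series(series)])
# ['temp_new'] -- deprecated series hidden by default
print([ts["externalId"] for ts in list_time_series(series, include_deprecated=True)])
# ['temp_new', 'temp_old'] -- active choice to include them
```

A first-class isDeprecated property would let CDF apply this filter server-side across search, asset hierarchy, and Grafana, instead of every consumer re-implementing it.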