Tell us what features you’d like to see and support other ideas by upvoting them! Share your ideas in a user story format like: As a <role>, I want to <function>, so that <benefit>. This will help us better understand your requirements and increase the chance of others voting for your request.
When adding time series data to Industrial Canvas, showing the tag name and the description field in the time series tile window would clarify which tag is being shown. Submitted at the request of the Cognite project team.
When adding time series data to Industrial Canvas, showing the tag name and the description field in the time series preview window would make selecting the correct reading more effective. Currently, the time series tag alone often does not carry enough information to know which data to select. Submitted at the request of the Cognite project team.
With the recent update, it is now possible to add tables to Canvas. Currently, I believe you can only input text into the tables, but is it possible to embed asset information, timelines, and files into the tables? If this becomes possible, it would allow for a more organized dashboard in Canvas, greatly expanding its usability.
A customer from Oxy reported this. Issue described: when searching for documentation in Maintain, documents only show up when searching by a unique system-generated ID rather than by the document name, drawing number, etc. In short, the user wants search to work against document metadata, which they find more meaningful and easier to use. This behavior is reproducible and seems to align with how search currently functions in Maintain. However, the customer would like the search experience in Maintain to match that of CDF.
A customer from Oxy has highlighted the importance of search functionality when dealing with special characters in Industrial Canvas. Please see the use case below for further clarification: an asset named "4011-1 1/2'-G4J-014" exists in CDF, but when searching with the term "1 1/2'-G4J-014", the asset does not appear in the results. However, using "G4J-014" as the search term correctly displays the asset as the top result.
A customer from Oxy has reported that the UI in 'Browse 3D' should either open the selected file in a new tab or, upon clicking 'search results' in the top-left corner, return to the results without losing the selected 3D preview, eliminating the need to restart the search.
The customer (BBraun - @gary.goh@bbraun.com) has requested enabling a feature that allows Work Orders/Activities from Cognite Maintain to be added to CDF Industrial Canvas. In short, for every annotated asset on the CDF Canvas, the customer would like to bring in the associated Work Order/Activity from Maintain.
On behalf of Celanese InField super users: the users have expressed interest in being able to see both observations and any comments/notes linked to an asset on the asset overview page. This would make the data easily accessible rather than needing to open specific checklists. @Kristoffer Knudsen
At the moment, CDF Canvas lines for relations (from a P&ID to other documents, assets, charts, or time series) are only displayed in purple. This is an issue when working on a big P&ID and doing RCAs: all lines are purple and become mixed and hard to track, adding complexity to something that is meant to simplify work. Clients are asking for simplification tools for displaying relation lines, such as:
- The ability to change each line's color, thickness, and type, or an automatic change for every new relationship added from the P&ID.
- A legend tab where one can select what each color represents among the relationship lines, with the option to toggle and hide only some relationship lines related to certain assets or time series.
I am not sure if a similar idea has already been published on the Hub, but if not, kindly consider it.
Hi. I’d like to see more aggregation functions in the Grafana data source. Currently only average, interpolation and step_interpolation are allowed. Any chance to implement support for sum, count, etc.? Error: count is not one of the valid aggregates: AVERAGE, INTERPOLATION, STEP_INTERPOLATION. https://docs.cognite.com/cdf/dashboards/guides/grafana/timeseries
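For reference, the regular datapoints API already supports these aggregates; here is a minimal Python SDK sketch (the external ID is a placeholder and credentials are assumed to be configured) showing the aggregates we would like the Grafana data source to expose as well:

    from cognite.client import CogniteClient

    client = CogniteClient()  # assumes credentials are configured elsewhere

    # The plain datapoints API already exposes count, sum, min, max, etc.;
    # the ask is for the Grafana data source to surface the same aggregates.
    dps = client.time_series.data.retrieve(
        external_id="my/timeseries/external-id",  # placeholder external ID
        start="30d-ago",
        end="now",
        aggregates=["count", "sum", "average"],
        granularity="1h",
    )
    print(dps)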
Problem Statement: Synthetic Time Series in CDF are dynamically calculated based on expressions, but lack persistent identifiers. This limits usability when users need:
- Persistent IDs for discovery and access.
- Simplified queries via API and SDK.
- Centralized definition of expressions as part of the data model, avoiding the need for each backend to redefine them.
Suggested Approach: Introduce a Synthetic Time Series Definition object that:
- Allows defining synthetic series with a persistent external_id.
- Stores metadata like expressions, descriptions, and units.
- Enables dynamic evaluation without requiring data storage.
- Supports defining expressions as part of a model, enabling reuse across different systems without requiring redundant definitions in backends.
Benefits:
- Usability: persistent identifiers for easier access and queries.
- Consistency: eliminates repetitive expression definitions.
- Scalability: centralized expression definitions simplify updates and maintenance.
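To make the gap concrete, here is a minimal Python SDK sketch; the expression and external IDs are examples only, and the definition payload at the end is hypothetical, purely to illustrate the proposal:

    from cognite.client import CogniteClient

    client = CogniteClient()  # assumes credentials are configured elsewhere

    # Today: every consumer has to re-declare the expression at query time.
    result = client.time_series.data.synthetic.query(
        expressions="A - B",
        start="7d-ago",
        end="now",
        variables={"A": "pressure_A", "B": "pressure_B"},  # placeholder external IDs
    )

    # Proposed (hypothetical): a persistent definition object with its own
    # external_id, so consumers can query by ID instead of repeating the expression.
    synthetic_definition = {
        "externalId": "dp_pressure_drop_A_B",
        "expression": "A - B",
        "variables": {"A": "pressure_A", "B": "pressure_B"},
        "description": "Pressure drop across the A/B segment",
        "unit": "bar",
    }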
This is closely related to an existing suggestion (Cognite Hub); details specific to our application were already discussed with @Jason Dressel. In our data model, we already make use of synthetic expressions and have created a wrapper view for time series that can refer either to a regular time series instance or to a synthetic expression. In adopting CDM, we cannot really represent the same thing using CogniteTimeSeries, even if we extend the model, as this would lead to us creating an empty time series for every synthetic time series we want to create. The suggestion is to extend the existing CogniteTimeSeries in CDM with some support for storing and accessing synthetic time series expressions.
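A minimal sketch of the kind of extension we have in mind, using the Python SDK's data modeling classes; the space, version, view, and property names here are assumptions for illustration, not an agreed design:

    from cognite.client import CogniteClient
    from cognite.client.data_classes.data_modeling import (
        ContainerApply, ContainerProperty, Text,
        ViewApply, MappedPropertyApply, ContainerId, ViewId,
    )

    client = CogniteClient()  # assumes credentials are configured elsewhere

    # Container holding the synthetic expression (hypothetical property name).
    container = ContainerApply(
        space="my_space",
        external_id="SyntheticExpression",
        properties={
            "expression": ContainerProperty(type=Text(), nullable=True),
        },
    )

    # View extending CogniteTimeSeries with the expression property, so an
    # instance can be either a regular series or a synthetic definition.
    view = ViewApply(
        space="my_space",
        external_id="TimeSeriesOrSynthetic",
        version="v1",
        implements=[ViewId(space="cdf_cdm", external_id="CogniteTimeSeries", version="v1")],
        properties={
            "expression": MappedPropertyApply(
                container=ContainerId(space="my_space", external_id="SyntheticExpression"),
                container_property_identifier="expression",
            ),
        },
    )

    client.data_modeling.containers.apply(container)
    client.data_modeling.views.apply(view)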
This issue was reported by a user at SLB who encountered unexpected behavior when sorting the runHistory field in GraphQL. It appears that sorting is not applied as expected when using first: 1 in the query.

    type Workflow {
      runHistory: [WorkflowRun]
    }

    type WorkflowRun {
      endTime: Timestamp
    }

    query MyQuery {
      listWorkflow {
        items {
          runHistory(sort: {endTime: DESC}, first: 1) {
            items {
              endTime
            }
          }
        }
      }
    }

By running the above query, he expects the first WorkflowRun in runHistory to be the most recent. If the user changes first: 1 to first: 10, then he is able to see the WorkflowRun items in descending order.
Hello, I have been trying to use the ODBC extractor to pull data from Snowflake. I could do that by following the instructions here: https://docs.cognite.com/cdf/integration/guides/extraction/configuration/db/#-snowflake However, this requires me to put passwords in the YAML config. Is there any way to use a DSN from ODBC instead? I see that the connection-string parameter is not supported when I choose type as snowflake. Thanks!
Problem Description: the Cognite File Extractor for SharePoint is not able to recursively retrieve documents from a SharePoint site that has sub-sites, e.g. a Document Control System.
Alternatives explored:
1. Provide the SharePoint site URL as described in the documentation (Sharepoint Online | Cognite Documentation). Results: no errors, however, no documents are loaded. Evaluation: recursion does not occur from the site level down to the document or folder level where SharePoint stores documents.
2. Provide the document library and folder / sub-folder. Results: working, however, sustainability is not ideal. Evaluation: will require potentially >100 URLs to be configured and kept up to date when new document libraries or folders are created or deleted.
Recommended enhancement: recursion from the root node (SharePoint site). E.g. if a site URL is provided, recursion should go through each sub-site (if present) down to document libraries and sub-folders to retrieve documents. This should be treated as high priority. Alternative 2 above can be seen as a work-around, but keeping the SharePoint URLs in the config up to date will require coordination/sync with the source – likely through custom
Problem: if one or more of the source time series are missing data points, the corresponding points will also be missing from the synthetic time series output. Would it be possible to add an option to supply a default "not available" value for missing data points, which could then be used by the API to calculate a partial result?
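As a rough illustration with the Python SDK (the external IDs are placeholders, and the default_values parameter in the commented-out call is the requested feature, not something that exists today):

    from cognite.client import CogniteClient

    client = CogniteClient()  # assumes credentials are configured elsewhere

    # Today: if either input series has a gap, the synthetic output has a gap too.
    result = client.time_series.data.synthetic.query(
        expressions="A + B",
        start="24h-ago",
        end="now",
        variables={"A": "flow_meter_1", "B": "flow_meter_2"},  # placeholder external IDs
    )

    # Requested (hypothetical): supply a fallback per variable so the API can
    # still compute a partial result when one input is missing a data point.
    # result = client.time_series.data.synthetic.query(
    #     expressions="A + B",
    #     start="24h-ago",
    #     end="now",
    #     variables={"A": "flow_meter_1", "B": "flow_meter_2"},
    #     default_values={"A": 0.0, "B": 0.0},  # does not exist today; this is the feature request
    # )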
I understand that updates older than 7 days may be discarded from the endpoint, which is perfectly fine. However, I’m wondering if it’s possible to determine when a cursor is older than the given limit, since updates older than 7 days are discarded. Based on my understanding of your Subscription API, it would simply continue iterating over the oldest available updates on the endpoint, which could result in missing important updates that occurred between the time of my cursor and the current oldest update. For the Subscription API to be fully functional, I believe it's essential to either receive a notification or an exception when using an expired cursor, be able to determine the age of a cursor, or somehow be assured that iterating from a cursor won't lead to lost data. Knowing the age of the cursor would be preferable, as it would allow us to gauge how much margin we have, but it should also be possible to receive a notification if an expired cursor is being used. Is this currently possible, or are there any plans to implement such a feature?
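For context, this is roughly how we consume the subscription today; a Python SDK sketch where the method and field names are written from memory and may need adjusting:

    from cognite.client import CogniteClient

    client = CogniteClient()  # assumes credentials are configured elsewhere

    # Cursor persisted from a previous run (in practice loaded from storage).
    last_cursor = None  # placeholder

    # Iterate over changes for a data point subscription, keeping the cursor
    # so the next run can resume where this one stopped.
    for batch in client.time_series.subscriptions.iterate_data(
        external_id="my-subscription",  # placeholder subscription external ID
        cursor=last_cursor,
    ):
        print(len(batch.updates), "updates in this batch")
        last_cursor = batch.cursor  # persist this between runs
        if not batch.has_next:
            break

    # The gap: if the persisted cursor is older than the 7-day retention window,
    # iteration appears to continue from the oldest retained update with no error,
    # so there is no way to detect that updates in between were lost.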
Once in a while we make changes to our telemetry setup in such a way that certain time series get deprecated. This happens when we change control systems, make changes in our telemetry pipeline, or even discontinue operations at a location. We want to have the “deprecated” time series available in CDF, but it would be useful for us if these did not by default appear in search, in the asset hierarchy, or through Grafana and other solutions. It would be useful if there was a way to mark a time series as deprecated or as only containing historical data, so that in order to have the time series returned in search or list operations, the user would have to make an active choice to include them. My suggestion is an “isDeprecated” property on each time series. There are other solutions to the problem, but these all seem to make it more difficult for the user than simply not showing deprecated time series by default:
- Metadata field - isDeprecated (boolean)
- Move deprecated time series to a different data set
- Separate data models for “active” and “non-active” time series
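For comparison, the metadata-field workaround would look roughly like this with the Python SDK (the field name and external ID are our own examples), which also shows why every consumer has to remember to filter:

    from cognite.client import CogniteClient
    from cognite.client.data_classes import TimeSeriesUpdate

    client = CogniteClient()  # assumes credentials are configured elsewhere

    # Workaround today: tag deprecated series with a metadata field ourselves.
    client.time_series.update(
        TimeSeriesUpdate(external_id="old-plc-tag-123").metadata.add({"isDeprecated": "true"})
    )

    # ...and every consumer (search, Grafana, notebooks) must remember to filter.
    # Note that series without the key at all are also excluded by this filter,
    # which is part of why the workaround is awkward.
    active_only = client.time_series.list(metadata={"isDeprecated": "false"}, limit=None)

    # The request is a first-class flag that is hidden from search/list results
    # by default, with an explicit (hypothetical) opt-in such as include_deprecated=True.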
Posted for inquiry here: Cognite Hub, but I don’t believe the feature exists currently. We need a way of routing time_series and datapoint items from within a Hosted Extractor configuration to different, or possibly multiple, data sets. Question copied from the referenced posting, for product feature descriptiveness:

Within our implementation we have an existing Hosted Extractor reading data from an IoT Hub that contains multiple sites’ worth of data. Our Hosted Extractor mapping template filters for events that have a particular deviceId on them, representative of the location the events are coming from. In an effort to ingest another site’s data feed, I wanted to extend the template with an ELSE IF condition holding the mapping rules for the other location, which are almost identical to the first except for the target data sets, which I’ve come to realize are set in the Sink section of the extractor configuration. The net result is needing to create redundant Hosted Extractor configurations that change only a filter, rather than having a cascading ELSE IF ruleset that applies to the full stream. For example, this pseudo-template for our existing hosted extractor configuration for one site:

    if (context.messageAnnotations.`iothub-connection-device-id` == "SITE_A") {
      input.map(record_unpack => {
        "type": "raw_row",
        "table": "tb_iot_test",
        "database": "db_iot_testing",
        "key": concat("TS_CO_", record_unpack.NAME),
        "deviceId": context.messageAnnotations.`iothub-connection-device-id`,
        "TAG": record_unpack.NAME,
        "IP_INPUT_VALUE": record_unpack.IP_INPUT_VALUE,
        "IP_INPUT_TIME": record_unpack.IP_INPUT_TIME,
        "IP_INPUT_QUALITY": record_unpack.IP_INPUT_QUALITY
      }).filter(item => item.IP_INPUT_QUALITY == "Good").flatmap(item => [
        {
          "type": "time_series",
          "name": item.TAG,
          "externalId": item.key,
          "metadata": { "IP21_DEVICE_ID": item.deviceId },
          "isString": false,
          "isStep": false
        },
        {
          "type": "datapoint",
          "externalId": item.key,
          "value": try_float(trim_whitespace(item.IP_INPUT_VALUE), null),
          "timestamp": to_unix_timestamp(item.IP_INPUT_TIME, "%Y-%m-%dT%H:%M:%S.%6fZ")
        }
      ])
    } else {
      []
    }

Ideally I want to add an ELSE IF condition where the only change to the outcome is that iothub-connection-device-id equals SITE_B. While the record_unpack step is able to route different locations to different staging tables, I was unable to find a means to specify an alternative target for the time_series and datapoint items outside of creating a whole new configuration with a different Sink setting for this extractor. We have multiple sites all sharing one IoT Hub. IoT Hub has a limit on the number of consumers that can be connected, in this case larger than our number of sites. We handle routing in the consumer logic of this stream; however, Cognite appears not to provide this mechanism in the templating language, instead specifying the target in the sink. This presents a configuration-limit issue for us with respect to our source data and the constraints of the Cognite Hosted Extractor when specifying the target. Am I missing something and is there a way to achieve what we’re looking for, or will this require a secondary hosted extractor configuration to be implemented? If we must go down the path of redundant hosted extractors with modified logic and a different Sink, we’re going to hit an Azure IoT limit prior to our full scale-out, and would like to understand if this is a feature that can be provided, or if we should be planning otherwise.
Hello, we have confirmed that there is a restriction in the data model View (Container) definition that prevents the use of Japanese (double-byte) characters in property names. This limitation also applies to the Records/Streams API. Currently, our database column names and CSV file headers include Japanese characters, which means we are unable to use the existing names directly when registering these datasets as Records. As a result, we are now required to define a mapping between the Japanese column names and alphanumeric property names, and to establish, standardize, and manage a naming convention. However, this process is labor-intensive and may lead to confusion and operational burden in data utilization. Therefore, we would like to kindly ask if you could consider supporting the use of Japanese (double-byte) characters in property names within data model View definitions and the Records API.
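To illustrate the overhead, this is the kind of mapping layer we now have to maintain for every dataset (a simplified Python sketch; the column names, property names, and file name are examples only):

    import csv

    # Hand-maintained mapping from Japanese column headers to ASCII property names,
    # required only because double-byte characters are rejected in property names.
    HEADER_MAP = {
        "温度": "temperature",
        "圧力": "pressure",
        "設備名": "equipment_name",
    }

    def to_record_properties(row: dict) -> dict:
        """Rename Japanese CSV headers to the ASCII property names of the View."""
        return {HEADER_MAP.get(key, key): value for key, value in row.items()}

    with open("sensor_data.csv", newline="", encoding="utf-8") as f:  # example file
        records = [to_record_properties(row) for row in csv.DictReader(f)]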