This issue was reported by a user at SLB who encountered unexpected behavior when sorting the runHistory field in GraphQL: the sort does not appear to be applied when using first: 1 in the query.

    type Workflow {
      runHistory: [WorkflowRun]
    }

    type WorkflowRun {
      endTime: Timestamp
    }

    query MyQuery {
      listWorkflow {
        items {
          runHistory(sort: {endTime: DESC}, first: 1) {
            items {
              endTime
            }
          }
        }
      }
    }

When running the above query, the user expects the single WorkflowRun returned in runHistory to be the most recent one. If first: 1 is changed to first: 10, the WorkflowRun items are returned in descending order as expected.
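Until the sorting behavior with first: 1 is clarified, a possible client-side workaround is to over-fetch (e.g. first: 10) and pick the newest run locally. The sketch below is illustrative only and assumes the GraphQL response has already been parsed into a Python dict; the helper name and response shape are not part of any Cognite API.

    # Client-side workaround sketch: request first: 10 (where sorting works as
    # expected) and select the most recent WorkflowRun locally.
    def latest_run(workflow_item: dict):
        """Return the WorkflowRun with the newest endTime, or None if there are none."""
        runs = workflow_item["runHistory"]["items"]
        runs = [r for r in runs if r.get("endTime") is not None]
        return max(runs, key=lambda r: r["endTime"]) if runs else None

    # Example usage against a parsed GraphQL response:
    # response = {"data": {"listWorkflow": {"items": [...]}}}
    # for wf in response["data"]["listWorkflow"]["items"]:
    #     print(latest_run(wf))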
Hello, I have been trying to use the ODBC extractor to pull data from Snowflake. I could do that by following the instructions here: https://docs.cognite.com/cdf/integration/guides/extraction/configuration/db/#-snowflake However, this requires me to put passwords in the YAML config. Is there any way to use a DSN from ODBC instead? I see that the connection-string parameter is not supported when I choose snowflake as the type. Thanks!
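For reference, this is roughly what a DSN-based ODBC connection looks like in plain pyodbc, where the credentials are kept in the ODBC driver manager (odbc.ini or the Windows ODBC Data Source Administrator) instead of the application config. The DSN name is hypothetical, and this is not the extractor's own configuration syntax:

    # Sketch of a DSN-based ODBC connection with pyodbc. Because "SnowflakeProd"
    # (a hypothetical DSN) is defined in the ODBC driver manager, no password
    # has to appear in application-level YAML files.
    import pyodbc

    conn = pyodbc.connect("DSN=SnowflakeProd")
    cursor = conn.cursor()
    cursor.execute("SELECT CURRENT_VERSION()")
    print(cursor.fetchone())
    conn.close()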
Problem description: The Cognite File Extractor for SharePoint is not able to recursively retrieve documents from a SharePoint site that contains sub-sites, e.g. a Document Control System.

Alternatives explored:
1. Provide the SharePoint site URL as described in the documentation (SharePoint Online | Cognite Documentation).
   Results: no errors, however no documents are loaded.
   Evaluation: recursion does not occur from the site level down to the document or folder level where SharePoint stores the documents.
2. Provide the document library and folder / sub-folder.
   Results: working, however sustainability is not ideal.
   Evaluation: will require potentially >100 URLs to be configured and kept up to date whenever document libraries or folders are created or deleted.

Recommended enhancement: recursion from the root node (the SharePoint site), as sketched below. E.g. if the site URL is provided, recursion should go through each sub-site (if present) down into document libraries and sub-folders to retrieve documents. This should be treated as high priority. Alternative 2 above can be seen as a workaround, but keeping the SharePoint URLs in the config up to date will require coordination/sync with the source – likely through custom
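To make the requested behavior concrete, here is a rough sketch of a recursive walk from a site down through its sub-sites, document libraries, and folders using the standard Microsoft Graph endpoints. It is not the extractor's implementation; authentication, paging (@odata.nextLink) and throttling are omitted:

    # Rough sketch of the requested recursion against the Microsoft Graph API
    # (not the Cognite extractor's actual implementation).
    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"

    def get_values(url, token):
        r = requests.get(url, headers={"Authorization": f"Bearer {token}"})
        r.raise_for_status()
        return r.json().get("value", [])

    def walk_folder(drive_id, item_id, token):
        for child in get_values(f"{GRAPH}/drives/{drive_id}/items/{item_id}/children", token):
            if "folder" in child:
                yield from walk_folder(drive_id, child["id"], token)  # sub-folder
            else:
                yield child  # a document

    def walk_site(site_id, token):
        # Every document library (drive) in the site ...
        for drive in get_values(f"{GRAPH}/sites/{site_id}/drives", token):
            root = requests.get(
                f"{GRAPH}/drives/{drive['id']}/root",
                headers={"Authorization": f"Bearer {token}"},
            ).json()
            yield from walk_folder(drive["id"], root["id"], token)
        # ... and then every sub-site, recursively.
        for sub in get_values(f"{GRAPH}/sites/{site_id}/sites", token):
            yield from walk_site(sub["id"], token)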
Problem: If one or more of the source time series are missing data points, those data points will also be missing in the synthetic time series output. Would it be possible to add an option to supply a default "not available" value for missing data points, which could then be used by the API to calculate a partial result?
Source:
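As a sketch of the current behavior and the proposed option, using recent versions of the Cognite Python SDK (the external IDs are invented, and the default-value syntax at the end is hypothetical, not an existing API feature):

    # Current behaviour: where either source series has no data point, the
    # synthetic result also has a gap.
    from cognite.client import CogniteClient

    client = CogniteClient()
    dps = client.time_series.data.synthetic.query(
        expressions="ts{externalId='flow_in'} - ts{externalId='flow_out'}",
        start="30d-ago",
        end="now",
    )

    # Proposed (hypothetical, not an existing option): a per-series default used
    # when a source point is missing, so a partial result can still be computed,
    # e.g. something along the lines of
    #   "ts{externalId='flow_in', default=0} - ts{externalId='flow_out', default=0}"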
I understand that updates older than 7 days may be discarded from the endpoint, which is perfectly fine. However, I’m wondering if it’s possible to determine when a cursor is older than the given limit, since updates older than 7 days are discarded. Based on my understanding of your Subscription API, it would simply continue iterating over the oldest available updates on the endpoint, which could result in missing important updates that occurred between the time of my cursor and the current oldest update. For the Subscription API to be fully functional, I believe it's essential to either receive a notification or an exception when using an expired cursor, be able to determine the age of a cursor, or somehow be assured that iterating from a cursor won't lead to lost data. Knowing the age of the cursor would be preferable, as it would allow us to gauge how much margin we have, but it should also be possible to receive a notification if an expired cursor is being used. Is this currently possible, or are there any plans to implement such a feature?
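As a purely client-side mitigation sketch (not a Subscription API feature), one can persist the wall-clock time a cursor was last advanced alongside the cursor itself, and refuse to resume once the retention window may have passed. The 7-day window and the storage format are assumptions:

    # Persist the cursor together with the time it was last advanced, and fail
    # loudly instead of silently resuming from a cursor that may already point
    # before the oldest retained update.
    import json
    import time

    RETENTION_SECONDS = 7 * 24 * 3600  # assumed 7-day retention

    def save_cursor(path, cursor):
        with open(path, "w") as f:
            json.dump({"cursor": cursor, "saved_at": time.time()}, f)

    def load_cursor(path):
        with open(path) as f:
            state = json.load(f)
        age = time.time() - state["saved_at"]
        if age > RETENTION_SECONDS:
            raise RuntimeError(
                f"Cursor is {age / 86400:.1f} days old; some updates may already "
                "have been discarded, so a full re-sync is probably required."
            )
        return state["cursor"]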
Once in a while we make changes to our telemetry setup in such a way that certain time series get deprecated. This happens when we change control systems, make changes in our telemetry pipeline, or even discontinue operations at a location. We want to have the "deprecated" time series available in CDF, but it would be useful for us if these would not by default appear in search, in the asset hierarchy, or through Grafana and other solutions. It would be useful if there was a way to mark a time series as deprecated or as only containing historical data, so that in order to have the time series returned in search or list operations, the user would have to make an active choice to include those time series. My suggestion is an "isDeprecated" property on each time series. There are other solutions to the problem, but these all seem to make it more difficult for the user than simply not showing deprecated time series by default:
- Metadata field - isDeprecated (boolean); a filter sketch for this workaround is shown below
- Move deprecated time series to a different data set
- Separate data models for "active" and "non-active" time series
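As a sketch of how the metadata-field alternative could be consumed today with the Cognite Python SDK (the isDeprecated key is the convention proposed above, not a built-in property), including the caveat that makes it clumsier than a first-class flag:

    # Metadata-based alternative: tag deprecated series with a metadata key and
    # filter on it when listing. "isDeprecated" is the convention proposed in
    # this idea, not a built-in CDF property.
    from cognite.client import CogniteClient

    client = CogniteClient()

    # Default view for users: only series explicitly tagged as not deprecated.
    active = client.time_series.list(metadata={"isDeprecated": "false"}, limit=None)

    # Active opt-in to see deprecated / historical-only series.
    deprecated = client.time_series.list(metadata={"isDeprecated": "true"}, limit=None)

    # Caveat: series that lack the key entirely are excluded from the first
    # filter, so every series must be tagged - part of why a first-class
    # property would be simpler for users.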
Posted for inquiry here: Cognite Hub, but I don't believe the feature exists currently. We need a way of routing time_series and datapoint items from within a Hosted Extractor configuration to different, or possibly multiple, data sets.

Question copied from the referenced posting, for product feature descriptiveness: Within our implementation we have an existing Hosted Extractor reading data from an IoT Hub that contains data for multiple sites. Our Hosted Extractor mapping template filters for events that have a particular deviceId on them, representing the location the events are coming from. In an effort to ingest another site-location data feed, I wanted to extend the template with an ELSE IF condition holding the mapping rules for the other location, which are almost identical to the first except for the target data sets, which I've come to realize are set in the Sink section of the extractor configuration. The net result is needing to create redundant Hosted Extractor configurations that change only a filter, rather than having a cascading ELSE IF ruleset that applies to the full stream. For example, this pseudo-template for our existing hosted extractor configuration for one site:

    if (context.messageAnnotations.`iothub-connection-device-id` == "SITE_A") {
      input.map(record_unpack => {
        "type": "raw_row",
        "table": "tb_iot_test",
        "database": "db_iot_testing",
        "key": concat("TS_CO_", record_unpack.NAME),
        "deviceId": context.messageAnnotations.`iothub-connection-device-id`,
        "TAG": record_unpack.NAME,
        "IP_INPUT_VALUE": record_unpack.IP_INPUT_VALUE,
        "IP_INPUT_TIME": record_unpack.IP_INPUT_TIME,
        "IP_INPUT_QUALITY": record_unpack.IP_INPUT_QUALITY
      }).filter(item => item.IP_INPUT_QUALITY == "Good").flatmap(item => [
        {
          "type": "time_series",
          "name": item.TAG,
          "externalId": item.key,
          "metadata": { "IP21_DEVICE_ID": item.deviceId },
          "isString": false,
          "isStep": false
        },
        {
          "type": "datapoint",
          "externalId": item.key,
          "value": try_float(trim_whitespace(item.IP_INPUT_VALUE), null),
          "timestamp": to_unix_timestamp(item.IP_INPUT_TIME, "%Y-%m-%dT%H:%M:%S.%6fZ")
        }
      ])
    } else {
      []
    }

Ideally I want to add an ELSE IF condition where the only change to the outcome is that iothub-connection-device-id equals SITE_B in this case. While the record_unpack step is able to route different locations to different staging tables, I was unable to find a means to specify an alternative target for the time_series and datapoint items, other than creating a whole new configuration with a different Sink setting for this extractor.

We have multiple sites all sharing one IoT Hub. IoT Hub has a limit on the number of consumers that can be connected, in this case larger than our number of sites. We handle routing in the consumer logic of this stream; however, Cognite does not appear to provide this mechanism in the templating language, instead specifying it in the sink. This presents a configuration-limit issue for us with respect to our source data and the constraints the Cognite Hosted Extractor places on specifying the target.

Am I missing something, and is there a way to achieve what we're looking for, or will this require a secondary hosted extractor configuration to be implemented? If we must go down the path of redundant hosted extractors with modified logic and a different Sink, we're going to hit an Azure IoT limit prior to our full scale-out, and we would like to understand whether this is a feature that can be provided, or whether we should be planning otherwise.
Hello,

We have confirmed that there is a restriction in the data model View (Container) definition that prevents the use of Japanese (double-byte) characters in property names. This limitation also applies to the Records/Streams API. Currently, our database column names and CSV file headers include Japanese characters, which means we are unable to use the existing names directly when registering these datasets as Records.

As a result, we are now required to define a mapping between the Japanese column names and alphanumeric property names, and to establish, standardize, and manage a naming convention. However, this process is labor-intensive and may lead to confusion and operational burden in data utilization.

Therefore, we would like to kindly ask if you could consider supporting the use of Japanese (double-byte) characters in property names within the data model View definitions and the Records API.
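To illustrate the mapping layer this restriction currently forces, here is a small sketch of renaming Japanese CSV headers to alphanumeric property names before ingestion; the column names, property names, and file name are invented examples:

    # The Japanese CSV headers must be renamed to alphanumeric property names
    # before the rows can be written as Records.
    import pandas as pd

    HEADER_MAP = {
        "設備名": "equipmentName",
        "測定値": "measuredValue",
        "測定日時": "measuredAt",
    }

    df = pd.read_csv("sensor_data.csv")   # headers are in Japanese
    df = df.rename(columns=HEADER_MAP)    # map to ASCII property names

    # HEADER_MAP itself now has to be defined, standardised and maintained for
    # every dataset - exactly the overhead this idea asks to remove.
    print(df.columns.tolist())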
It would be nice if CDF warned users when they are uploading a document that already exists within CDF. It should still allow them to upload it if they wish, but a warning might help cut down on duplicate documents. This is specifically an issue with Canvas, where users can drag documents into their canvas. It is less of an issue when using file upload pipelines, because duplicates can be mitigated in those scenarios.
On behalf of Celanese InField super users, I would like the ability to view images that have been attached to a line item within a checklist while using the checklist overview page. This would make the review process a lot easier than needing to navigate to the specific checklist on the active checklists tab. Currently the checklists only show a grey dialogue box indicating that an image has been attached to a line item, but there is no way to view the image. @Kristoffer Knudsen
We want to be able to go from 3D Preview to 3D Full View in the new Search. Right now, when using the preview, I can only click on the selected equipment. I want to be able to select different equipment and navigate in the 3D model.
Despite multiple refreshes, the DM interface in CDF only shows two lines added to the Site. However, using Pygen, the user can see that all lines are attached to the Site. The user is suggesting another tool to use as a 'general data explorer' for data in models, because while a technical person can use Pygen, other non-technical people at the organization do occasionally need to see what is in the models.
I think this is more of a bug than a product idea. In Data Explorer, all metadata keys and values and all attribute values are shown in lower case. Metadata fields become less readable when everything is transformed to lower case. Fields like "BagnG1" also get transformed to "bagng1", which hurts readability. It is probably most problematic for units, where units like "Mm" and "mm" become the same.