Tell us what features you’d like to see and support other ideas by upvoting them! Share your ideas in a user story format like: As a <role>, I want to <function>, so that <benefit>. This will help us better understand your requirements and increase the chance of others voting for your request.
We use push-down filtering to get our relevant data from CDF into Power BI. We filter on the external ID property, since this is constant across our environments in CDF. Here is an example of what this looks like:

Source = Cognite.Contents(cdf_env & "/Timeseries/?$filter=ExternalId eq 'an_external_id'", null)

This works great for Time Series, but doing the same thing for Sequences leads to an error:

DataSource.Error: Microsoft.Mashup.Engine1.Library.Resources.HttpResource: Request failed:
OData Version: 3 and 4, Error: The remote server returned an error: (400) Bad Request. (The provided filter was invalid.)
OData Version: 4, Error: The remote server returned an error: (400) Bad Request. (The provided filter was invalid.)
OData Version: 3, Error: The remote server returned an error: (400) Bad Request. (The provided filter was invalid.)
Details: DataSourceKind=Cognite DataSourcePath={"project":"akerbp-dev\/Sequences?$filter=ExternalId eq 'val_alarms_top_10_weekly'"}

Push-down filtering on the external ID seems to be supported based on the information here. Am I doing something wrong? Is there maybe a bug somewhere? Filtering on the Name attribute does work, and it keeps us going for the moment, but I don’t like that the Name is not a unique identifier of the object. Thanks for any help and advice! :)
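A quick way to rule out a problem with the sequence itself is to look it up by external ID through the Python SDK. A minimal sketch, assuming a configured CogniteClient and using the same external ID as in the failing filter:

```python
# Minimal sanity check via the Python SDK (assumes a configured CogniteClient);
# the external ID is the same one used in the Power BI filter above.
from cognite.client import CogniteClient

client = CogniteClient()  # credentials/project taken from the environment

seq = client.sequences.retrieve(external_id="val_alarms_top_10_weekly")
print(seq)  # returns the sequence if the external ID exists, otherwise None
```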
More than once now we have discovered that our time series data, which is currently extracted using the Event Hub hosted extractor within the platform, has stopped flowing for reasons we cannot determine. To make hosted extractors production-worthy they must be able to notify platform owners when they fail; currently they do not, and instead keep polling with bad session or connection info.

Here’s what this looks like in the logs provided by the hosted extractor Insight view: for starters, this shouldn’t show 0 datapoints per hour. That’s a potential opportunity for threshold alerting. Expanding the details, these repeated failures are nondescript and do not indicate why things failed.

Discovering these issues reactively, after platform users have asked why the data cannot be trusted (things are missing), will present challenges for platform trust. We are not yet in production with our platform and must find a solution to this before the data can be used by our teams.
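As a stopgap until hosted extractors can notify owners themselves, a scheduled freshness check along these lines could raise the alarm when a time series stops receiving datapoints. This is only a sketch; the external IDs and threshold are assumptions, and it presumes a configured CogniteClient:

```python
# Stopgap freshness check (sketch): alert if a time series has not received
# datapoints within a threshold window. External IDs below are hypothetical.
from datetime import datetime, timedelta, timezone

from cognite.client import CogniteClient

client = CogniteClient()  # credentials/project taken from the environment

THRESHOLD = timedelta(hours=1)
WATCHED = ["plant_a/temperature", "plant_a/pressure"]  # hypothetical external IDs

for ext_id in WATCHED:
    latest = client.time_series.data.retrieve_latest(external_id=ext_id)
    if latest is None or not latest.timestamp:
        print(f"ALERT: {ext_id} has no datapoints at all")
        continue
    last_seen = datetime.fromtimestamp(latest.timestamp[0] / 1000, tz=timezone.utc)
    if datetime.now(timezone.utc) - last_seen > THRESHOLD:
        print(f"ALERT: {ext_id} last received a datapoint at {last_seen.isoformat()}")
```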
Currently, the data model and data model instance actions are limited to read and write capabilities. To enhance clarity and delineate responsibilities within CDF groups more effectively, I propose dividing the existing configuration into distinct categories: read, write/update, and delete. Another enhancement would be to disallow the deletion of containers that have associated views. The same principle should apply to views: they should not be deletable while other components (views or data models) are referencing them.
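As a rough illustration of the proposal, using plain capability objects rather than a real API call; the finer-grained UPDATE/DELETE actions are the hypothetical part:

```python
# Rough illustration of the proposal (plain dicts, not a real API call).
# Today: data model instance capabilities only distinguish READ and WRITE.
current_capability = {
    "dataModelInstancesAcl": {
        "actions": ["READ", "WRITE"],
        "scope": {"all": {}},
    }
}

# Proposed (hypothetical): split write into finer-grained actions so a group
# can be granted update rights without delete rights.
proposed_capability = {
    "dataModelInstancesAcl": {
        "actions": ["READ", "UPDATE", "DELETE"],
        "scope": {"all": {}},
    }
}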
Hi,

Is it feasible to incorporate a new feature into the SAP extractor that facilitates reporting? For instance, scheduling the SAP extractor on a daily basis to append data to the RAW database, even if the primary key remains the same on subsequent days, and ensuring that it does not overwrite existing data.

Presently, we've implemented a workaround by including a date field in the key and utilizing our own extractor to accomplish this. Can this functionality be achieved using the Cognite SAP extractor? The same question applies to the DB extractor. The support ticket raised is https://cognite.zendesk.com/hc/en-us/requests/10724?page=1.

Thanks,
Shubha
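For context, the workaround is roughly along these lines: the load date is baked into the RAW row key so each day's rows land beside, rather than on top of, the previous day's. A sketch with hypothetical database, table, and column names, assuming a configured CogniteClient:

```python
# Sketch of the workaround: append the load date to the RAW row key so daily
# loads do not overwrite each other. Database/table/column names are hypothetical.
from datetime import date

from cognite.client import CogniteClient

client = CogniteClient()  # credentials/project taken from the environment

def insert_daily_rows(rows: dict[str, dict]) -> None:
    """rows maps the SAP primary key to the column values for that record."""
    today = date.today().isoformat()
    dated_rows = {f"{key}:{today}": columns for key, columns in rows.items()}
    client.raw.rows.insert(db_name="sap_staging", table_name="orders", row=dated_rows)

insert_daily_rows({"4500001234": {"material": "M-01", "quantity": 10}})
```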
Hi Team,

In the SAP Cognite extractor, we have until now been able to use multiple existing columns in the key. Now the requirement is that I want to add a dynamic field to the key.

Use case:
Key: [Key1, Key2]

Proposed idea: I want to append the date to all keys so that I can keep track of record changes every day.
Key: [Key1, Key2, <CurrentDatetime>]

Thanks in advance,
Sai Harshita
Hi,

I noticed that the current setup with the Dockerfile cognite/db-extractor-base runs the extractor as the root user within the container. In our use case, we have to heavily modify the Dockerfile to change the setup so that it runs as a non-root user. It would be highly beneficial if the Docker image either accepted an argument to run as a non-root user or, preferably, ran the script as a non-root user by default. Running containers as the root user is not best practice for containerized development and is often restricted in organizational environments.

I'm not sure if there is an intentional reason for running as root, or if this aspect hasn't been considered yet at Cognite. If you are open to changing this, I can assist and provide our current solution. I believe this adjustment would improve security and compatibility for multiple users.

Best regards,
Matias Ramsland
matias.ramsland@akerbp.com
- It is not possible to aggregate edges, making it impossible to create queries such as: how many reporting units are at the Reporting Site CLK? (See the sketch after this list.)
- The aggregation API of Cognite has several limitations: we cannot group by date, more complex filters do not work either, and aggregations are limited to primitive fields in the GraphQL API.
- Using the GraphQL API, we cannot filter an object type based on its edges. For example, querying the reporting sites associated with a specific reporting unit is not possible.
- Pagination parameters are unstable. For instance, the "has previous page" parameter does not work.
- The limit of 10,000 items for subqueries in the SDK Query method makes pagination extremely costly. We believe this limit should only apply to the select output.
- It should be possible to filter and sort by the createdTime and lastUpdatedTime columns in FDM.
- There is a lack of search capabilities using wildcards or case-insensitive searches with multiple parameters.
- The Cognite API lacks transaction capabilities.
- FDM documentation should have more examples.
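To make the first point concrete, a question like "how many reporting units are at Reporting Site CLK" seems to require paging through all the edges and counting client-side, roughly as in this sketch. The space, edge type, and external IDs are illustrative only, and it assumes a configured CogniteClient:

```python
# Client-side workaround sketch for counting edges, since edges cannot be
# aggregated server-side. Space/type/external IDs are illustrative only.
from cognite.client import CogniteClient
from cognite.client.data_classes import filters

client = CogniteClient()  # credentials/project taken from the environment

is_unit_edge = filters.And(
    filters.Equals(
        ["edge", "type"],
        {"space": "reporting", "externalId": "ReportingSite.units"},
    ),
    filters.Equals(
        ["edge", "startNode"],
        {"space": "reporting", "externalId": "CLK"},
    ),
)

# Pages through every matching edge just to obtain a count, which is exactly
# the kind of query a server-side edge aggregation would make cheap.
edges = client.data_modeling.instances.list(
    instance_type="edge", filter=is_unit_edge, limit=None
)
print(f"Reporting units at CLK: {len(edges)}")
```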
When creating interactive engineering diagrams in the Cognite web UI, we have the option to select between the standard and advanced model. It would be beneficial to be able to combine this with customized entity matching and to choose from the available feature types (such as "simple", "bigram", etc.). This option would be just as useful as, if not more useful than, the advanced detection model.
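For context, this combination is available programmatically today. A rough sketch, with hypothetical entities and a configured CogniteClient assumed, of fitting an entity matching model with an explicitly chosen feature type and then using the matches for diagram detection:

```python
# Sketch: fit an entity matching model with an explicitly chosen feature type,
# then use the matches when detecting tags in diagrams. The entity data below
# is hypothetical.
from cognite.client import CogniteClient

client = CogniteClient()  # credentials/project taken from the environment

sources = [{"id": 1, "name": "21-PT-1019"}]                     # e.g. tag candidates
targets = [{"id": 101, "name": "21PT1019", "externalId": "21PT1019"}]  # e.g. assets

model = client.entity_matching.fit(
    sources=sources,
    targets=targets,
    match_fields=[{"source": "name", "target": "name"}],
    feature_type="bigram",  # the knob we would like exposed in the diagram UI
)
job = model.predict()
print(job.result)  # waits for the job and prints the proposed matches

# The matched entities could then be passed on to interactive diagram detection,
# e.g. client.diagrams.detect(entities=..., search_field="name", file_ids=[...]).
```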
We use CDF to store model results that are in essence a dataframe with a datetime index and multiple columns, one per scenario/attribute/timeseries. This could be a weather forecast model or a hydropower optimization model, etc. We wish to store the model results as a sequence, as the values are related to the same model run and would otherwise require a significant number of time series (given that the models are run several times a day and produce thousands of time series for each run). We would therefore appreciate it if sequences supported a datetime index, and not just int. This would reduce the code we need to wrap around the CDF client in order to store our dataframes.
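Today the wrapper boils down to something like the sketch below: the datetime index is converted to epoch milliseconds so it fits the integer row numbers that sequences require. Column names and the external ID are illustrative, and the sequence is assumed to already exist with matching columns:

```python
# Workaround sketch: squeeze a datetime-indexed result dataframe into a
# sequence by converting the index to epoch milliseconds (integer row numbers).
# Column names and the external ID are illustrative; the sequence must exist.
import pandas as pd
from cognite.client import CogniteClient

client = CogniteClient()  # credentials/project taken from the environment

df = pd.DataFrame(
    {"scenario_1": [1.2, 1.4], "scenario_2": [0.9, 1.1]},
    index=pd.to_datetime(["2024-05-01 00:00", "2024-05-01 01:00"]),
)

df.index = df.index.astype("int64") // 1_000_000  # ns -> ms since epoch
client.sequences.data.insert_dataframe(df, external_id="hydro_model_run_latest")
```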
The oil and gas industry handles different kinds of curves; one of them is the IPR curve, which in CDF corresponds to a sequence. We must load 500 IPR curves, one for every well, but I have found a stopper: when we use the ODBC extractor with the sequence destination, the external_id must be put directly in the YAML configuration file (image below) every time we load an IPR curve. Would it be possible for external_id to work like it does for time series, taking its value from a column, so that all the sequences could be loaded with the same file or query? The same happens for Oracle queries against the database and for Excel files. @Sebastian Wolter @Aditya Kotiyal
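As an illustration of what we are asking the extractor to handle, a rough sketch of splitting one query result into one sequence per well based on a column. The names are purely illustrative, the per-well sequences are assumed to already exist in CDF, and a configured CogniteClient is assumed:

```python
# Sketch of the loop we would like the extractor to do for us: one query
# result, split into one sequence per well based on a column. Names are
# illustrative and the per-well sequences are assumed to already exist.
import pandas as pd
from cognite.client import CogniteClient

client = CogniteClient()  # credentials/project taken from the environment

# Imagine this is the result of the ODBC/Oracle query or the Excel file.
rows = pd.DataFrame(
    {
        "well_external_id": ["WELL-001", "WELL-001", "WELL-002"],
        "row_number": [0, 1, 0],
        "pressure": [3120.0, 2980.0, 3055.0],
        "rate": [1500.0, 1720.0, 1410.0],
    }
)

for ext_id, group in rows.groupby("well_external_id"):
    df = group.set_index("row_number")[["pressure", "rate"]]
    client.sequences.data.insert_dataframe(df, external_id=f"IPR:{ext_id}")
```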
Hi, I am sure this has been raised before, but would it be too much to ask to enable standard logging calls in Python when using Cognite Functions? While I am fully aware that it works to use print statements instead, it is very contrary to expectations for most developers that logging does not work, including for your own consultants (various Cognite consultants on our projects have themselves converted print statements to logger.info calls in the name of “best practice”, only to be surprised that nothing gets printed out). Moreover, it gives the impression that the product is not quite “working” or that there is “something weird going on”. I would consider it similar to a cosmetic UX defect which is easy to work around: sometimes hard to prioritise, but it affects the overall impression of the product when things do not work as expected. Maybe it is time to give it some TLC.
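For reference, a sketch of the kind of boilerplate this pushes into every handler: a possible workaround is to re-route the root logger to stdout inside the handle function, which is exactly the sort of thing it would be nice not to need:

```python
# Possible workaround (sketch): route standard logging to stdout inside the
# function handler so messages show up alongside print output in the call logs.
import logging
import sys

def handle(client, data=None, secrets=None, function_call_info=None):
    logging.basicConfig(stream=sys.stdout, level=logging.INFO, force=True)
    logger = logging.getLogger(__name__)

    logger.info("This should now appear in the function logs")  # instead of print()
    return {"status": "ok"}
```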
Oxy is requesting the ability to link Charts to Infield so that, based on the historical data available in a time series in Charts, the product is capable of recommending thresholds for a measurement in Infield. E.g., based on the engine’s measured temperatures over the last year in Charts, the normalized expected temperature is between 90 and 110 °C, so 90 would automatically populate as the lower limit in Infield and 110 as the upper limit for this motor.
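Purely as an illustration of the kind of recommendation logic being asked for; the external ID, window, and percentile choices below are assumptions, and a configured CogniteClient is assumed:

```python
# Illustrative sketch: derive a recommended operating band from a year of
# history by taking low/high percentiles of the daily averages.
from datetime import datetime, timedelta, timezone

from cognite.client import CogniteClient

client = CogniteClient()  # credentials/project taken from the environment

end = datetime.now(timezone.utc)
start = end - timedelta(days=365)

df = client.time_series.data.retrieve_dataframe(
    external_id="engine_01/temperature",  # hypothetical time series
    start=start,
    end=end,
    aggregates=["average"],
    granularity="1d",
)

series = df.iloc[:, 0].dropna()
lower, upper = series.quantile(0.05), series.quantile(0.95)
print(f"Suggested limits: {lower:.0f} to {upper:.0f} °C")
```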