Tell us what features you’d like to see and support other ideas by upvoting them! Share your ideas in a user story format like: As a <role>, I want to <function>, so that <benefit>. This will help us better understand your requirements and increase the chance of others voting for your request.
Hi Team,

Currently, when we make any changes to a data model from the UI, there is no change log shown to the user before publishing. Ideally, the user should be shown the changes that are going to be committed before publishing a model from the UI, so that they do not inadvertently make any mistakes. The change log would be similar to what git shows before a commit and push (additions, deletions, etc.).

This will improve the user experience for anyone using the data modeling service from the UI.

Thanks @Aditya Kotiyal
Our TA and Projects team is evaluating the 3D tool inside Cognite to be the backbone of how work is planned. I would like to submit a few enhancement requests that would make the planning process in 3D more valuable. One is the ability to add and move 3D objects around a scene so planners can understand what equipment is needed or whether equipment can fit in a working area: cranes, people, scaffolding, etc. Specifically for cranes, it would be valuable to add different sizes and the ability to understand the reach of the crane, so planners can understand what work could be done without moving it. I can see how this might pair with the existing capability in Maintain to see work orders in 3D.
Hi team,

We’d like to suggest an improvement for the Transformations feature. Currently, in Transformations, we only have access to staging tables and not to the instances of the data model that have already been created. This leads to some redundancy and inefficiencies in transformation logic.

Use case: Let’s say I load Well objects in a first transformation, applying a set of filters (e.g., valid wells, correct country, etc.). Then, in a second transformation, I want to load Wellbore objects. However, to ensure consistency, I want to join with only the wells that were already loaded into the data model, not reapply the same filtering logic I used earlier. Currently, since I can’t access the already loaded Well instances in the data model, I have to duplicate the filtering logic or rely on external references, which is error-prone and inefficient.

Suggestion: Allow read access to instances of the data model from within transformations (see the sketch below), so we can:

- Reference already created objects (e.g., Well) directly when creating new ones (e.g., Wellbore).
- Avoid duplication of logic.
- Ensure better consistency across related business objects.

Let me know if this is technically feasible or already planned. Thanks a lot!
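For reference, this kind of instance read is already possible from the Python SDK; a minimal sketch of the access we would like to have inside a transformation (the space, view, and version identifiers are placeholders):

```python
from cognite.client import CogniteClient
from cognite.client.data_classes.data_modeling import ViewId

client = CogniteClient()

# Look up already-ingested Well instances through the existing
# instance-read API. Identifiers below are placeholders.
well_view = ViewId(space="my_space", external_id="Well", version="v1")
wells = client.data_modeling.instances.list(
    instance_type="node",
    sources=well_view,
    limit=None,  # fetch all matching nodes
)
valid_well_ids = {w.external_id for w in wells}
# A Wellbore transformation could then join against valid_well_ids
# instead of re-applying the Well filtering logic.
```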
Hello,

We are defining descriptions for every attribute of our data objects in cdf toolkit. However, these do not show up anywhere. We want to be able to see these descriptions in the Industrial tools Search UI when we expand the properties of an object. Could you turn this into a feature request please? Thanks
Hello,

Today the CDF Snowflake extractor only supports OAuth or user/password authentication schemes: https://docs.cognite.com/cdf/integration/guides/extraction/configuration/db/#-snowflake

We would like to be able to authenticate using a private key: https://docs.snowflake.com/en/user-guide/key-pair-auth

Is this part of your Snowflake extractor roadmap? Thank you
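For context, this is roughly how key-pair authentication works with Snowflake's own Python connector, per the linked Snowflake guide; the extractor would need an equivalent configuration option. The account, user, and key path below are placeholders:

```python
# Sketch of Snowflake key-pair authentication with
# snowflake-connector-python, following Snowflake's key-pair auth guide.
import snowflake.connector
from cryptography.hazmat.primitives import serialization

with open("/path/to/rsa_key.p8", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)

# The connector expects the key as DER-encoded PKCS#8 bytes.
pkb = private_key.private_bytes(
    encoding=serialization.Encoding.DER,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),
)

conn = snowflake.connector.connect(
    account="my_account",
    user="MY_USER",
    private_key=pkb,
)
```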
Currently the data models page shows all data models (system models added by default by Cognite, and custom models). Add a filtering capability to show only the models created by users. Currently there is only a search button, but a filter to hide the default data models would enhance the user experience, as the number of data models provided by Cognite has grown considerably. Thanks
When browsing data using the Data Catalog view, a “Governance Status” is presented on datasets. We have datasets created with users having write access, to facilitate uploads from Canvas. What has been identified, however, is that these files become searchable in the Add Data view of Canvas. When searching for data, a mechanism to prevent searching ungoverned datasets would address concerns about people adding ungoverned data to perspectives, which could lead to incorrect decision making.
Summary: Get aggregates, specifically the maximum and minimum value for a given timeframe of a time series, without needing to specify the granularity.Suppose I want to get the maximum value of a time series between time A and B and the difference between A and B is 3 hours. To achieve this, I would need to specify a granularity of, say, 120 seconds and then find the maximum of the returned data points again. This additional computation can be avoided if the API allowed the user to just specify the start and stop time and get the single maximum/minimum data point in the interval.
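A sketch of today's workaround with the Python SDK (the external ID and 3-hour window are placeholders); the request is for the API to return the single extreme directly when no granularity is given:

```python
from cognite.client import CogniteClient

client = CogniteClient()

# Today: fetch max/min aggregates at some fixed granularity,
# then reduce again client-side.
dps = client.time_series.data.retrieve(
    external_id="my_timeseries",
    start="3h-ago",
    end="now",
    aggregates=["max", "min"],
    granularity="2m",
)
overall_max = max(dps.max)  # the second reduction the API could do for us
overall_min = min(dps.min)
```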
We are using Pygen to generate a Python SDK for our data model. Pygen has given us a good set of classes and methods, which are really helpful. However, the descriptions added to this generated SDK are very generic and not that useful. As far as I can tell, the only way to add descriptions and examples is to manually edit each class and method.

We are planning to use Pygen to create the SDK and may regenerate it multiple times as the data model version changes. If we add our own descriptions and examples manually, then each time a new SDK is generated we will lose them. Descriptions should be applied to all classes, methods, arguments, etc.

I believe there should be a way to create the SDK with relevant descriptions and examples. Please suggest what needs to be done. If no approach is available, then please consider this a new product idea.
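As a stopgap, one could keep curated descriptions outside the generated package and re-apply them after every regeneration. A rough, hypothetical sketch (the descriptions file and generated module path are assumptions, not Pygen features):

```python
# Hypothetical post-generation step: re-apply curated docstrings after
# each Pygen run so they survive regeneration. descriptions.yaml and
# my_generated_sdk are placeholders, not part of Pygen.
import importlib

import yaml  # pip install pyyaml

with open("descriptions.yaml") as f:
    docs = yaml.safe_load(f)  # e.g. {"Well": "A well as defined by ..."}

module = importlib.import_module("my_generated_sdk.data_classes")
for class_name, docstring in docs.items():
    cls = getattr(module, class_name, None)
    if cls is not None:
        cls.__doc__ = docstring  # patch at import time, before use
```

Note that this only affects runtime introspection (e.g., help()); IDE tooltips still read the generated source, which is part of why first-class support in Pygen would be preferable.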
Under the “Add Data” button in Charts, the user suggests renaming the field from “Add timeseries” to “Add events or timeseries” to better reflect the available options. Additionally, in the Category dropdown, options like Assets, Files, Functional Locations, and Equipment should either be removed or grayed out, as these data types cannot be added to a chart. This will help reduce confusion and make the interface more intuitive for users. Requested by: mark_wrzyszczynski@oxy.com
Hi team,

We would like to request a feature that enables unit conversion directly on Sequences, similar to what is available today for Time Series. Here’s a more detailed description of the idea:

- Each column in a Sequence would have an associated unit (e.g., 'psi', 'm', 'kg').
- When querying a Sequence, the user can specify the unit they want for each column; if omitted, the original unit is returned.
- Alternatively, the user can define a default unit system (e.g., SI or field units), and Cognite would automatically convert all columns to match this system.

Why this would be helpful:

- It reduces the burden on end users to manage unit conversions manually after retrieving data.
- It aligns with the behavior already available in Time Series, making the API more consistent and user-friendly across data types.

Let me know if this is on the roadmap or if more details are needed. Thanks!
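For illustration, time series already support this through target_unit; a hypothetical mirror for Sequences might look like the following (target_units and the unit external IDs are assumptions, not an existing API):

```python
from cognite.client import CogniteClient

client = CogniteClient()

# Existing time series capability: server-side conversion via target_unit.
dps = client.time_series.data.retrieve(
    external_id="pressure_ts",
    target_unit="pressure:psi",  # unit catalog external ID, for illustration
)

# Hypothetical Sequences equivalent (target_units does not exist today).
rows = client.sequences.data.retrieve(
    external_id="depth_log",
    start=0,
    end=None,
    # target_units={"pressure": "pressure:psi", "depth": "length:m"},
)
```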
Hi team,

We’d like to have a real-time mechanism to consume new row-level data from Sequences as soon as it is ingested.

Use case: Sequences are often populated continuously (e.g., depth log data), and we need to react to new rows with low delay. Right now, the only option is polling, which isn’t efficient.

Suggestion: Introduce a Change Data Capture (CDC) or subscription mechanism at the row level for Sequences to:

- Detect new data as it arrives.
- Stream updates to external systems or trigger real-time logic.

Thanks for considering this enhancement!
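Today the best we can do is a polling loop along these lines (the external ID, interval, and downstream handler are placeholders; we assume row numbers form the DataFrame index, as in the SDK's to_pandas for sequence rows):

```python
import time

from cognite.client import CogniteClient

client = CogniteClient()
last_row = 0  # next unread row number


def handle_new_rows(df):
    """Placeholder for downstream real-time logic."""
    print(df)


while True:
    rows = client.sequences.data.retrieve(
        external_id="depth_log",
        start=last_row,
        end=None,
    )
    df = rows.to_pandas()  # row numbers form the index
    if not df.empty:
        last_row = int(df.index.max()) + 1
        handle_new_rows(df)
    time.sleep(10)  # polling latency a CDC feed would eliminate
```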
Hello, is it planned to have an equivalent of Pygen in other programming languages? We are very interested in TypeScript. :) Thanks
I need to be able to create a checklist that spans multiple people/roles so that I can capture the full workflow. @Andrew Wagner @Mike Hammons
We are creating various types of templates in Infield, such as routine equipment inspection checklists, equipment checks, and plant start-up checklists that are only carried out during plant commissioning. Checklists issued from these templates are currently displayed mixed together. If they could be filtered by template type (for example, routine inspections, equipment checks, start-up checklists, etc.), inspectors would find it easier to select a checklist, and it would also be easier to find the desired checklist later when reviewing.
Only one discipline can be set per user in Infield, but I would like to be able to set multiple disciplines. For example, field staff set the equipment they are in charge of (Equipment A or Equipment B or Equipment C, etc.) as their discipline, while their supervisors need to check all the equipment (Equipment A and Equipment B and Equipment C, etc.), so supervisors end up having multiple disciplines. At present, we change the discipline as needed during operation, but it would be nice if we could set multiple disciplines from the start to avoid this inconvenience.
The Power BI REST API Connector today only auto‑paginates responses that include a single, top‑level string cursor. However, queries against the /models/instances/query endpoint return a top‑level cursors object, which isn’t currently supported. Since this endpoint is widely used, adding native auto‑pagination for its responses would greatly simplify integrations.
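Roughly, the difference in response shapes looks like this (simplified; field contents are illustrative):

```python
# Shape the connector can already auto-paginate: one top-level string cursor.
list_response = {
    "items": ["..."],
    "nextCursor": "opaque-cursor-string",
}

# Shape returned by /models/instances/query: one cursor per result-set
# expression, which the connector does not follow today.
query_response = {
    "items": {"wells": ["..."], "wellbores": ["..."]},
    "nextCursor": {"wells": "cursor-a", "wellbores": "cursor-b"},
}
```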
Currently in our Charts instance, if I want to add additional PI tag data from the relevant P&IDs, clicking the P&IDs icon does not automatically load the drawing. I have to select the correct category and then open the correct P&ID. This will be difficult for end users to navigate.
Hi Team, we have a request from Aker Solutions: transformations that populate data models take much longer than those that populate RAW tables. We request a feature to improve the performance of transformations that write data to data models.
Good morning. Just a small cosmetic observation: when you go to check capabilities, it looks like they are shown in order of creation/assignment instead of alphabetical order, so when you are trying to validate a particular capability you may have to look through all of them just to make sure it is present. If capabilities were sorted alphabetically, it would be easy to find what you are looking for without reviewing all of them, especially in groups that have several. Thank you
I want Atlas AI to be able to automatically generate a work order based on observations I make during my rounds that require one. @Andrew Wagner @Mike Hammons
Due to differences between the Google Cloud Storage and Azure Blob Storage services, CDF behaves somewhat differently on Google Cloud than on Azure. The main difference is the set of characters the two services support in file/blob names.

The Azure Blob Storage API has much stricter rules on what characters are permitted in blob names than Google Cloud Storage, so we URL-encode all file names to ensure that we do not conflict with the blob naming rules in Azure Blob Storage. This encoding is performed automatically by the Azure Storage SDK, which we used to build the Files API on Azure. More information from Microsoft about blob names: Naming and Referencing Containers, Blobs, and Metadata - Azure Storage.

This can be contrasted with the Google service’s naming restrictions, which are basically “anything goes” except newlines, directory traversals, and “.well-known/acme-challenge” (which could be used to acquire an SSL certificate for Google’s cloud storage service via the ACME protocol, so naturally not allowed): About Cloud Storage objects | Google Cloud.

As the blob name is visible to the browser when downloading, the file gets the percent-encoded name on download unless the downloading client chooses a different local file name.
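For illustration, the effect of this encoding can be reproduced with Python's standard library (the file name is an example):

```python
from urllib.parse import quote, unquote

original = "report #3 (draft)?.pdf"
encoded = quote(original, safe="")  # same idea as the Azure SDK's encoding
print(encoded)           # report%20%233%20%28draft%29%3F.pdf
print(unquote(encoded))  # round-trips back to the original name
```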