Tell us what features you’d like to see and support other ideas by upvoting them! Share your ideas in a user story format like: As a ˂role˃, I want to ˂function˃, so that ˂benefit˃. This will help us better understand your requirements and increase the chance of others voting for your request.
I am following up on the ability to view native files such as DWG and DGN in Cognite. We were promised that this would be enabled at some point.
This product idea is created on behalf of CNTXT. Show a warning message about the size limitation in the UI when someone attempts to upload a large file that might cause issues when previewing.
Hi. I’ve got a simple data model set up, with the following view:

```graphql
type MonitoredObject {
  name: String
  tradeTimeSeriesName: String
  watercourse: String
  statusTimeSeries: TimeSeries
  rawTimeSeries: TimeSeries
  prepTimeSeries: TimeSeries
}
```

I’m trying to make a dashboard in Grafana to display the values from the `statusTimeSeries` of all MonitoredObjects, and I’d like the labels to display the value from the `tradeTimeSeriesName` property. The following query renders the time series correctly, but I’m unable to use `tradeTimeSeriesName` as the label:

```graphql
{
  listMonitoredObject {
    items {
      tradeTimeSeriesName
      statusTimeSeries {
        __typename
        externalId
      }
    }
  }
}
```

The time series render correctly, but I’d like to give them another label. In the table view, I see that the retrieved `tradeTimeSeriesName` values appear on a different row.

Any suggestion how I could change my GraphQL query, or use transformations in Grafana, to get the label I want on the status history panel? For the other query types in the Cognite Data Source for Grafana, there is a “label” field where I could enter, for instance, `{{description}}` to set the label. Is there anything similar for the “Data Models” query type?
Dependency-based Questioning: if an operator responds to a certain question, subsequent questions can be automatically displayed based on that response. This is a very important feature for 3 out of 4 M&M sites. Washington Works has more than 300 templates, and many of them need this feature. The same applies to the Richmond site. This should be considered.
When I look at dates in CDF, it represents them as AA.BB.YYYY. As an American, used to seeing dates in the MM-DD-YYYY format, I have to think for a minute about which is the day and which is the month. Specifically, I’m looking at the list of data models and the Last Modified column. Charts uses YYYY-MM-DD, which is somewhat better as it’s less ambiguous, but still not “normal”. Extraction Pipelines tell you the delta (“3 months ago” or “2 days ago”) with a tooltip in YYYY-MM-DD.

It would be nice if dates were shown using the browser’s locale. There are functions in most languages to get the browser locale and format dates accordingly. So I would see MM-DD-YYYY and my European colleagues would see DD-MM-YYYY. It would be nice if it were done consistently across CDF too.

Note, I’m not talking about time zones, only the date/time formatting. Adjusting for time zones is another topic entirely.
Hi, is it possible to tell the WITSML extractor (using the YAML file) not to update (overwrite) metadata, i.e. a no-overwrite option for metadata, just like the PI extractor has? Thank you.
When we use transformations, we tend to generate many of them for testing purposes, and every user does the same. This leads to lots of transformations in the Transformations section, but there is no way to arrange or organize this info, which is all spread out with no grouping at all.

It would be nice to have the option to create something like folders or projects with a distinctive name, so that I as a user can put all my transformations inside one. They would then be kept together in a folder-type structure, making them easier to find and avoiding a vast flat list of transformations in the GUI. Also allow folders inside folders, and so on. For example, I could create a “Sebastian” folder and inside it create subfolders (or a branch-type organization) such as “testing”, “production”, “Witsml”, or whatever is useful to me, to organize the info better and not mix it with others’ work.
I have a use case where I need to retrieve all Cognite Functions and perform operations on a subset of them based on specific criteria. To achieve this, I want to use the metadata field in Cognite Functions as a filtering mechanism. However, the Cognite Functions API currently does not allow updating metadata for existing functions, or adding metadata to functions that were created without it.

Current limitation: the only way to modify metadata for a function is to delete and re-upload the function. This approach has significant drawbacks:

- Loss of schedules: deleting a function also deletes any associated schedules, which need to be recreated manually.
- Loss of history: run history is also lost upon re-uploading, impacting the ability to analyze past runs.

Proposed feature: introduce functionality in the Cognite Functions API that allows:

- updating the metadata field for existing functions;
- adding metadata to existing functions that do not have metadata.

Benefits:

- Improved filtering: users can dynamically tag or categorize functions with metadata and retrieve only the relevant ones for specific operations.
- Preservation of schedules and history: avoids the need to delete and recreate functions, preserving associated schedules and run history.
- Enhanced flexibility: makes Cognite Functions more adaptable to evolving use cases without requiring disruptive workflows.

Use case example: I would use metadata to tag functions based on their purpose (e.g., [CALC], [ANALYTICS]) and filter them efficiently. The inability to update or add metadata on existing Cognite Functions prevents me from easily maintaining and managing my functions.
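To illustrate the filtering part of this request, here is a minimal sketch. The dicts are stand-ins for the function resources you would retrieve from the Functions API (for example via `client.functions.list()` in the Python SDK); the `purpose` key and its tag values are hypothetical.

```python
# Sketch: client-side filtering of functions by a metadata tag.
# The dicts stand in for retrieved function resources; the "purpose"
# key and its values are hypothetical examples.

def filter_by_metadata(functions, key, value):
    """Return only the functions whose metadata contains key == value."""
    return [f for f in functions if (f.get("metadata") or {}).get(key) == value]

functions = [
    {"name": "pump-efficiency", "metadata": {"purpose": "[CALC]"}},
    {"name": "daily-report", "metadata": {"purpose": "[ANALYTICS]"}},
    {"name": "legacy-job", "metadata": None},  # created without metadata
]

calc_functions = filter_by_metadata(functions, "purpose", "[CALC]")
print([f["name"] for f in calc_functions])  # -> ['pump-efficiency']
```

This only works well if the metadata can be kept up to date, which is exactly what the proposed feature would enable without deleting and recreating the function.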
Hello team,

Currently, a CDF project is provisioned with a default admin group which is mapped to a source group ID. To set up a CDF project, a person who has admin access needs to create a group with the capabilities needed for bootstrapping and map it to a source group. After this, the project can be bootstrapped using the Toolkit through an automated pipeline.

If we got a default bootstrap-admin group, similar to the oidc-admin group, at the time of project creation, the manual steps would no longer be needed and the full project setup process could be automated.

Thanks, Snehal.
In data search, the 3D model focuses on a searched equipment or instrument tag. However, after slicing or maneuvering around the model, the user cannot focus back on the equipment or instrument that was searched.
Problem: currently, Cognite's GraphQL API requires specifying the unit system for each attribute individually. This approach is repetitive and impractical when querying multiple attributes, especially for large-scale use cases.

Proposed idea: introduce a global unit system parameter at the query level, allowing users to define a default unit system (e.g., metric or imperial) that applies to all attributes in the request. Individual attributes could still override this setting if needed.
Develop an advanced training program to equip users with skills for contextualizing point cloud data, focusing on both detected and undetected objects. The training should address gaps in traditional modeling approaches by providing practical, hands-on experience with diverse scenarios.

Challenges addressed:
- Limited automation in object detection, requiring significant manual effort.
- Difficulty in contextualizing objects that remain undetected in raw point cloud data.
- Inability to handle diverse and complex industrial scenarios effectively.

Hands-on examples and exercises:
- Detected objects: import and preprocess a point cloud dataset; use AI-driven tools to identify and classify detected objects; automatically link detected objects to an asset hierarchy, metadata, or P&ID diagrams.
- Undetected objects: demonstrate manual workflows for identifying undetected objects within the point cloud; tag, classify, and link undetected objects using the training interface; show how to create relationships between manually contextualized objects and other datasets.
- Use examples from different industries (oil & gas, manufacturing, energy, etc.) to cover various asset types.
- Include examples of objects that are partially occluded, poorly defined, or from atypical asset classes.

Resource materials:
- Develop a library of sample datasets, best practices, and case studies for ongoing reference.
Monitoring jobs should offer more threshold options than just “above” or “below”. We have use cases where the threshold needs to be “between”, with the ability to enter two values. I can also see users wanting “above or equal to”, “below or equal to”, “equal to”, or “outside of a specified range”.
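The requested threshold types can be sketched as a small evaluation function. This is purely illustrative; the `breaches` name, the kind strings, and the convention of using `low`/`high` bounds are hypothetical, not an existing monitoring-job API.

```python
# Illustrative sketch of the requested threshold kinds.
# Convention (hypothetical): "above"/"above_or_equal" compare against `high`,
# "below"/"below_or_equal"/"equal" compare against `low`,
# "between"/"outside" use the closed interval [low, high].

def breaches(value, kind, low=None, high=None):
    """Return True when `value` violates the threshold of the given kind."""
    if kind == "above":
        return value > high
    if kind == "above_or_equal":
        return value >= high
    if kind == "below":
        return value < low
    if kind == "below_or_equal":
        return value <= low
    if kind == "equal":
        return value == low
    if kind == "between":   # alert while the value is inside the range
        return low <= value <= high
    if kind == "outside":   # alert while the value is outside the range
        return value < low or value > high
    raise ValueError(f"unknown threshold kind: {kind}")

print(breaches(5.0, "between", low=2.0, high=8.0))  # -> True
print(breaches(9.5, "outside", low=2.0, high=8.0))  # -> True
```

Whether “between” should alert inside or outside the range is a design choice; offering both kinds, as above, covers either reading.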
The Cognite PI extractor is case-sensitive and will only pick up tags with the exact name they have in the OSIsoft PI server data source. This is causing major issues for our data ingestion process, since the tags shared by the use case proponent are not always written exactly as on the PI server itself (capitalization may differ throughout the name). For example:

- Shared tag name: A01AAB0A.pv
- Tag name in PI server: a01AAB0A.PV

In this situation, the following config would fail to ingest the listed tag:

```yaml
extractor:
  include-tags:
    - "A11AAB0A.pv"
```

It would be great if the PI extractor had a case-sensitive: true/false flag to allow case-insensitive tag filtering for ingestion. For example, something like this would allow the listed tag to be ingested:

```yaml
extractor:
  case-sensitive: false
  include-tags:
    - "A11AAB0A.pv"
```

We often fail to ingest time series with this kind of discrepancy, which causes inconvenience on the user side and consumes time on our side as Cognite PI extractor maintainers. We believe it would be much easier to read/ingest the PI tags regardless of capitalization, or if we could add a flexibility parameter or a regex expression that would match a wider range of spellings of a specific tag.
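The two variants requested above (a case-sensitivity flag and a regex alternative) can be sketched in a few lines. This is a sketch of the desired semantics, not the extractor's actual implementation; the function names are hypothetical.

```python
import re

# Sketch of the requested matching behavior (hypothetical; the real
# extractor currently matches include-tags case-sensitively).

def tag_included(tag, include_tags, case_sensitive=True):
    """Check a PI tag name against an include list, optionally ignoring case."""
    if case_sensitive:
        return tag in include_tags
    wanted = {t.lower() for t in include_tags}
    return tag.lower() in wanted

def tag_matches_pattern(tag, pattern):
    """Regex alternative: re.IGNORECASE gives case-insensitive matching."""
    return re.fullmatch(pattern, tag, flags=re.IGNORECASE) is not None

print(tag_included("a01AAB0A.PV", ["A01AAB0A.pv"]))                        # -> False
print(tag_included("a01AAB0A.PV", ["A01AAB0A.pv"], case_sensitive=False))  # -> True
print(tag_matches_pattern("a01AAB0A.PV", r"A01AAB0A\.pv"))                 # -> True
```

Lower-casing both sides, as `tag_included` does, is the simplest way to honor a `case-sensitive: false` flag without changing how exact matching works when the flag is true.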
I’m investigating the PI Asset Framework extractor. I see how to bring in assets, and I think I see how to get time series data. However, I don’t see how to bring in event frames. I’m not an expert with PI AF by any means, but it seems like these would map nicely to CDF events. What I don’t see is how to configure the extractor to bring in the event frames. Does it do this? What do I need in the config file to turn it on? Is there additional documentation on the PI AF extractor beyond what’s available here: https://docs.cognite.com/cdf/integration/guides/extraction/pi_af/pi_af_configuration?
Hi, on behalf of Celanese: I’m working on creating Data Workflows for a project that runs transformations requiring specific credentials. The credentials set in the workflow trigger can call the transformations, but they do not (and should not) have the same level of permissions as the credentials defined for the transformations themselves.

The issue I’m facing is that transformations are being executed in “current user” mode instead of “client credentials” mode, which makes the transformation throw a missing-capability error. Unfortunately, there’s no option to specify this behavior when setting up a workflow task. Is there a way to overcome this limitation?
On P&IDs with many connectors, it is inconvenient to try to record which line number you were on on each page when you go to the next page. There is no way to see the previous page or get back to it. Please highlight where you came from.

The first image shows the original P&ID, where the user selects the next P&ID to be directed to. The second image shows the view after being directed to that P&ID, where the user cannot see where they were directed from. To keep track of this, the user needs to keep a separate note, which is inconvenient. Highlighting where the user was directed from would be very helpful.
Allow users to edit the locations in the 3D model for activities that are not shown in the 3D model. Some activities may not be directly associated with a “tagged” item in the 3D model, yet there may still be a work package (replace grating, small valve, pipe support, etc.). Allow users to select both a point and a polygon to display the work area in the 3D model and associate it with an activity in Maintain.
A message keeps prompting after clicking on the Streamlit app every time the user logs in. Perhaps a disable button could be provided to avoid the frequent message pop-up.
In order to request aggregates from large views in Data Modeling into Power BI, it would be beneficial to have the /aggregate endpoint exposed in OData (and in Power BI as a result). Otherwise, all the data needs to be downloaded to Power BI first and the aggregates performed locally; this lowers performance and is not possible for views with millions of instances.
Hello, is there a way that users can define their own user-defined functions in CDF so they can be used in transformations? We duplicate the same logic in different transformations and would like to centralize it in one function (similar to what you are doing internally with the `is_new` or `cdf_assetSubtree` functions). Thanks, MT
Right now, in order to update all tasks under a group, the user needs to go to "..." -> "Set all tasks to" -> "OK" or "Not OK". However, this is too much work. I want to be able to just click a button on the header of the group and set all actions to "OK", "N/A", or "Not OK". I want something like this: