We are using Infield on iPhone and iPad. While the mobile mode display of Infield fits perfectly on the smaller screen of the iPhone, we believe that more information could be displayed on a single screen on the larger screen of the iPad. For example, would it be possible to improve the mobile mode layout to resemble desktop mode, with the checklist on the left, task items in the center, and asset information on the right? I would appreciate it if there were options to choose layouts for both iPhone and iPad. It is possible to switch from mobile mode to desktop mode on the iPad, but in that case Infield cannot be used offline, so we hope to display more information in mobile mode.
Current alert notifications are generic and require the user to open them to see the details. Suggestions: allow customization of the subject header so that at a glance the user can see the key information about the alert; provide a way to send alerts to email distribution groups on top of individuals; provide a duplicate function in Monitoring to create a new monitoring job from an existing one; and allow creating monitoring from a Calculation, not only from a Timeseries tag.
When creating a checklist from a template to work on immediately, you can only select the date range for the planned work. The time shown on the checklist defaults to 12:00 AM on the date the checklist is created. It would be useful if checklist creation defaulted to the time the checklist is created, or gave the option to enter a time.
Allow Point Cloud and 360 Images to be compatible in Maintain
To improve the user experience when creating annotations in a P&ID, it would be beneficial to bring the area the user has drawn a rectangle over into focus when creating annotations. I have experienced that the side panel on the right-hand side for creating tag annotations gets in the way, and it is irritating to keep moving the drawing around to keep the rectangle in view. Also, as a user I want to be able to easily read the tag or document text in order to create the correct manual annotation for text that wasn't recognized by OCR.
A user (jsablok@slb.com) from SLB reported encountering a 401 response when attempting to hit the API https://api-docs.cognite.com/20230101/tag/Projects/operation/listChildProjects/ using a service principal that is part of the admin group. Upon consulting with the Auth team, it was identified that EntraID service principals cannot be used to interact with CogIdP endpoints. This limitation has been acknowledged and added to the support backlog for future consideration.
Ability to require that all tasks be completed, or a comment entered, before a checklist can move to DONE status. This would mostly be for Mechanical Integrity or Compliance-related checklists, with alerts/reminders if not completed.
For the file category view in Search, the user has to press “show more” many times to show all the files and categories related to an asset. This is an adoption blocker. I suggest loading all the categories as empty folders, and then when a user selects a category, loading all the files under that category.
Hello everyone, my name is Matt. I just joined Cognite Hub. I’m an industrial control system enthusiast and recently have been part of meetings at work where Cognite has been brought up for a few different use cases in some of our business units. I’m interested in learning more about it and exploring the capabilities of Cognite/CDF. I just watched the Cognite Product Tour 2024 on YouTube and was really impressed.
We currently have the ability to turn off annotations and connecting lines in Canvas. I do not always want these annotations because they can make a canvas too “busy” on projects where I just want to share it, collaborate, or present. Sometimes people I share the canvas with may not know how to turn these off. I would like Canvas to cache or “remember” my choices so the annotations/connecting lines do not have to be turned off each time someone opens the canvas. The default should be to have them turned “on”; I would just like them to stay “off” once I select that choice. Attaching visual examples. I would be happy to answer any clarifying questions. Thank you!
It says that the project is not valid and does not give me the hello world output.
Welcome to the CDF Fundamentals Discussion! This discussion is dedicated to helping learners of the Cognite Data Fusion Fundamentals learning path succeed. If you’re struggling with the exercises in this learning path, try the tips & tricks below or post a comment with the challenge you’re facing. You can also post your own tips and respond to fellow learners’ questions. Cognite Academy’s instructors are also here to help.
Hello everyone! I’m Gerardo, an MES Consultant specializing in real-time data solutions for manufacturing. My goal is to learn more about Cognite and explore how it can contribute to my career, starting with CDF. Regards, Gerardo
I got a 401 "Unauthorized" error when using Postman, and also sometimes a bad request error, even though I did everything as shown in the course. Please help; this has taken a lot of my time.
Hi there, I know there’s a lot happening in NEAT right now, and we're doing our best to keep up with the changes and learn from all the improvements. That said, I have a question regarding the documentation. The docs refer to core_data_model, which doesn’t seem to exist. Should we be using the example to load the core data model instead? For example, the documentation shows:

neat.read.cdf.core_data_model(["CogniteAsset", "CogniteEquipment", "CogniteTimeSeries", "CogniteActivity", "CogniteDescribable"])

But in our case, we’re currently using:

# extend the core data model through examples
neat.read.examples.core_data_model()
neat.subset.data_model(["CogniteAsset", "CogniteEquipment", "CogniteTimeSeries", "CogniteActivity"])
Hello Cognite team, I would like to request consideration for adding the possibility of sum aggregation for synthetic time series. This feature would greatly enhance our data analysis capabilities and provide more comprehensive insights. We kindly request your consideration of this enhancement in your future development plans. Our technical team is available to discuss this in further detail and to provide any additional information that might be helpful. Best regards, Ievgen
How can we configure an OPC UA data node (a string) as an event in Cognite instead of a time series (the default)? P.S. The existing OPC UA server does not support configuring it as an event.
I am working with the Cognite hosted REST extractor and am not able to perform an incremental load; I am getting a Kuiper HTTP error while making a request. Can someone explain what the key name is when we use query params for incremental load, and how the value should look in JSON with a conditional statement that picks a constant value on the first execution and last_run from the context after that? (Assume we have to modify the startindex and lastindex query params after the first execution.)
I am trying to use the CogniteSdk in C# to build my FDM model nodes and edges. I am able to insert the nodes, but I am having an issue when trying to define the edge. As you can see below, to create an edge, the "Type" needs to be defined. There is no error, but when I check my UserDashboard model, it does not link to my DashboardItem. Does anyone know how we can insert an edge using the C# SDK? FDM:

type UserDashboard @view(version: "c3020ef716088a") {
  userId: String!
  createdDateTime: Timestamp
  lastUpdateDatetime: Timestamp
  dashboard: [DashboardItem]
}

type DashboardItem @view(version: "62c414860c7734") {
  userId: String!
  index: Int!
  id: Int!
}
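For comparison, a minimal sketch of the same edge structure using the Python SDK rather than C#, assuming an instance space "my_space" and hypothetical node external IDs; GraphQL-defined relations typically use an edge type of "<TypeName>.<fieldName>", here "UserDashboard.dashboard":

from cognite.client import CogniteClient
from cognite.client.data_classes.data_modeling import EdgeApply, DirectRelationReference

client = CogniteClient()  # assumes credentials are configured in the environment

SPACE = "my_space"  # assumed instance space; replace with the space your nodes live in

# Edge from a UserDashboard node to a DashboardItem node, typed after the
# "dashboard" field on UserDashboard.
edge = EdgeApply(
    space=SPACE,
    external_id="userDashboard1.dashboard.item1",  # hypothetical edge external ID
    type=DirectRelationReference(space=SPACE, external_id="UserDashboard.dashboard"),
    start_node=DirectRelationReference(space=SPACE, external_id="userDashboard1"),
    end_node=DirectRelationReference(space=SPACE, external_id="item1"),
)

client.data_modeling.instances.apply(edges=edge)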
In the "Learn to Use the Cognite Python SDK" module of the Data Engineer course, I got stuck on the hands-on test. As described in the readme file, after I created a data set and a root asset, I just do not know how to do this section:
- Read the `all_countries.csv` file as a dataframe and list all the unique regions in the world.
- For each geographical region, create a corresponding CDF asset that is under the "global" root asset and is associated with the "world_info" data set.
- Next, create country-level assets using the data from the CSV and link them to their corresponding region-level assets.
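A minimal sketch of one way to approach this with pandas and the Cognite Python SDK, assuming the CSV has "name" and "region" columns and that the "global" root asset and the "world_info" data set already exist with those external IDs (all placeholders):

import pandas as pd
from cognite.client import CogniteClient
from cognite.client.data_classes import Asset

client = CogniteClient()  # assumes credentials are configured in the environment

# Read the CSV and list the unique regions (assumes a "region" column)
df = pd.read_csv("all_countries.csv")
regions = df["region"].dropna().unique()
print(regions)

data_set_id = client.data_sets.retrieve(external_id="world_info").id  # assumed external ID

# One asset per region, parented to the "global" root asset (assumed external ID)
region_assets = [
    Asset(external_id=f"region_{r}", name=r, parent_external_id="global", data_set_id=data_set_id)
    for r in regions
]
client.assets.create(region_assets)

# One asset per country, parented to its region asset (assumes a "name" column)
country_assets = [
    Asset(
        external_id=f"country_{row['name']}",
        name=row["name"],
        parent_external_id=f"region_{row['region']}",
        data_set_id=data_set_id,
    )
    for _, row in df.dropna(subset=["region"]).iterrows()
]
client.assets.create(country_assets)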
I’d like the ability to rename files. I do not care if it takes a long time to complete, or even if I need to replace the internal CDF ID. I just need some feature or option that will allow me to rename the file. Sometimes, data pipelines require us to put files into CDF before we can craft a useful name, and having to write extra workarounds seems silly; put that burden on CDF!
Jupyter Notebook in Cognite Data Fusion is currently in beta. We are very excited about the interest you have shown, and want to engage even more with you through this group. Please use this community to ask questions about Jupyter Notebook, suggest features or discuss problems. To get you started, here are some useful resources to look into:
Jupyter Notebook documentation
Jupyter Notebook tutorial
Jupyter Notebook Academy microlearning course
We use Synthetic Time Series for as many things as we can, to avoid the trouble of re-indexing and interpolating in pandas. However, CDF Synthetic Time Series can’t do basic logical, comparison, or lag operations. For instance, I want a time series that gives me 1 when my TS value is between an LCL and a UCL and 0 otherwise. In pandas this is simple, e.g. x.between(LCL, UCL).astype(int), but since CDF STS doesn’t support if, and, not, >, <, etc., this doesn’t work. Another use case was when I needed the moving average for the last hour. I had to make a time series and manage its creation, DM entry, etc., when I would have preferred to use an STS, since the only use of that calculation was as an intermediate step. The map command in CDF STS looked promising, but it doesn’t work without a string input. I’d like STS to support: if(), and(), or(), not(), lag(), lead(), movingAverage(), movingSum(), abs().
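A minimal sketch of that pandas workaround, assuming a hypothetical time series external ID "my_ts" and example control limits:

import pandas as pd
from cognite.client import CogniteClient

client = CogniteClient()  # assumes credentials are configured in the environment

# Retrieve raw datapoints into a dataframe (external ID and time range are placeholders)
df = client.time_series.data.retrieve_dataframe(
    external_id="my_ts",
    start="30d-ago",
    end="now",
)
values = df.iloc[:, 0]

LCL, UCL = 10.0, 50.0                               # example control limits
in_control = values.between(LCL, UCL).astype(int)   # 1 inside the limits, 0 outside
moving_avg_1h = values.rolling("1h").mean()         # moving average over the last hour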
I have a data model with a many-to-many relation between view 1 and view 2. This is modeled as an edge, stored as columnEdgeView2 in view 1, with an edge pointing in the other direction in the other view. I have been digging through the Python SDK documentation but cannot figure out how to retrieve this information. Can anyone help me with this? What I need to do is query either of the edge properties and do something similar to instances.list for regular views. If possible, getting columnEdgeView2 included in the result when listing instances the normal way would also work. I have tried to replicate my data model structure below.

type View1 {
  column1: String
  columnEdgeView2: [View2]
}

type View2 {
  column1: String
  columnEdgeView1: [View1]
    @relation(
      type: { space: "space", externalId: "View1.columnEdgeView2" }
      direction: INWARDS
    )
}

Thank you!
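A minimal sketch of one way to list these edges with the Python SDK, assuming the edges live in the space called "space" used in the @relation directive above:

from cognite.client import CogniteClient
from cognite.client.data_classes import filters

client = CogniteClient()  # assumes credentials are configured in the environment

# Edges created for a GraphQL relation carry an edge type of "<ViewName>.<fieldName>";
# here that is "View1.columnEdgeView2", as in the @relation directive above.
is_relation = filters.Equals(
    ["edge", "type"],
    {"space": "space", "externalId": "View1.columnEdgeView2"},
)

edges = client.data_modeling.instances.list(
    instance_type="edge",
    filter=is_relation,
    limit=None,
)
for edge in edges:
    print(edge.start_node, "->", edge.end_node)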
Hello Team, I am trying to check how many edges (entities, in my case) are null in the view (Event). Is there any specific way to do so? I am currently trying the SDK query API as below, but it is not working. Please give your insights.

{
  "with": {
    "Event": {
      "nodes": {
        "filter": {
          "hasData": [
            { "type": "view", "space": "slb-pdm-dm-governed", "externalId": "Event", "version": "1_7" }
          ]
        },
        "chain_to": "destination",
        "direction": "outwards"
      },
      "limit": 10000
    },
    "Event_2_entities.Entity": {
      "edges": {
        "from": "Event",
        "direction": "outwards",
        "filter": {
          "and": [
            {
              "equals": {
                "property": ["edge", "type"],
                "value": { "space": "slb-pdm-dm-governed", "externalId": "Event.entities" }
              }
            },
            {
              "and": [
                {
                  "hasData": [
                    { "type": "view", "space": "slb-pdm-dm-governed", "externalId": "Event", "version": "1_7" }
                  ]
                },
                {
                  "not": {
                    "exists": {
                      "property": ["slb-pdm-dm-governed", "Event/1_7", "entities"]
                    }
                  }
                }
              ]
            }
          ]
        },
        "chain_to": "destination"
      },
      "limit": 10000
    },
    "entities.Entity": {
      "nodes": {
        "from": "E
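As an alternative to the raw query API, a minimal sketch using the Python SDK to count Event instances that have no entities edge, assuming the space, view, and edge-type names from the query above:

from cognite.client import CogniteClient
from cognite.client.data_classes import filters
from cognite.client.data_classes.data_modeling import ViewId

client = CogniteClient()  # assumes credentials are configured in the environment

SPACE = "slb-pdm-dm-governed"
event_view = ViewId(space=SPACE, external_id="Event", version="1_7")

# All nodes with data in the Event view
event_nodes = client.data_modeling.instances.list(
    instance_type="node",
    sources=event_view,
    limit=None,
)

# All "Event.entities" edges
entity_edges = client.data_modeling.instances.list(
    instance_type="edge",
    filter=filters.Equals(["edge", "type"], {"space": SPACE, "externalId": "Event.entities"}),
    limit=None,
)
events_with_entities = {(e.start_node.space, e.start_node.external_id) for e in entity_edges}

events_without_entities = [
    n for n in event_nodes if (n.space, n.external_id) not in events_with_entities
]
print(f"{len(events_without_entities)} Event instances have no entities edge")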