Recently active
Ability to require that all tasks be completed, or a comment entered, before a checklist can be moved to DONE status. This would mainly serve Mechanical Integrity or compliance-related checklists, with alerts/reminders if items are not completed.
Problem Overview: When generating a Python SDK notebook using pygen with a NEAT-created data model, you might encounter the following error:

UndefinedError: 'cognite.pygen._core.models.fields.connections.OneToOneConnectionField object' has no attribute 'data_class'

Root Cause: This error typically occurs when one or more properties in your data model are missing the "Name" field in the NEAT spreadsheet under the "Properties" tab. Although NEAT allows you to create and save a data model without filling in the property name, this leads to missing metadata when pygen tries to generate code; specifically, it fails when trying to reference data_class on an incomplete field definition.

When It Happens: You'll see the error when calling:

pygen = generate_sdk_notebook(data_model_id, client)

✅ How to Fix It: Open the NEAT spreadsheet for the data model causing the issue. In the "Properties" tab, ensure that every row has a value filled in for the "Name" column. If any "Name" values are missing, fill them in and re-save the data model before regenerating the SDK.
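For reference, a minimal sketch of the call that triggers the error; the data model identifier below is hypothetical, and generate_sdk_notebook comes from cognite.pygen:

```python
from cognite.client import CogniteClient
from cognite.pygen import generate_sdk_notebook

client = CogniteClient()  # assumes auth is already configured in the environment

# Hypothetical data model id (space, external_id, version); replace with your own.
data_model_id = ("my_space", "MyDataModel", "v1")

# Raises the UndefinedError above while any property lacks a "Name" in the NEAT
# sheet; succeeds once every row in the "Properties" tab has the Name column filled.
pygen = generate_sdk_notebook(data_model_id, client)
```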
In Cognite Academy Fundamentals: Working With CDF: Contextualize, I have followed all of the instructions several times for Entity Matching and, in every instance, get "No matches found", so there is nothing to confirm or write to CDF. What am I missing here?
For the file category view in Search, the user has to press "show more" many times to show all the files and categories related to an asset. This is an adoption blocker. I suggest loading all the categories as empty folders and then, when a user selects a category, loading all the files under that category.
Hello, we are using the CDF Toolkit to deploy our containers, views and data models. We recently added many new views that are interdependent, i.e. they reference each other in their properties through a source block or via an implements block. Here are examples of these view dependencies:

- externalId: View1
  implements: []
  name: View1
  properties:
    test:
      container:
        externalId: Container1
        space: some_space
        type: container
      containerPropertyIdentifier: Test
      name: test
      source:
        externalId: View2
        space: some_space
        type: view
        version: v1
  space: some_space
  version: v1
- externalId: View2
  implements: []
  name: View2
  properties:
    wow:
      container:
        externalId: Container2
        space: some_space
        type: container
      containerPropertyIdentifier: Test
      name: wow
  space: some_space
  version: v1

or

- externalId: Country
  implements:
    - externalId: CountryAttributes
      space: '{{sp_dm_dap_knowledge_graph}}_wv'
      type: view
      version: '{{dap_version}}'
  name: Country
  properties:
    Wells:
      connectionType: multi_reverse_direct_relation
      name: We
Hello everyone, my name is Matt. I just joined Cognite Hub. I’m an industrial control system enthusiast and recently have been part of meetings at work where Cognite has been brought up for a few different use cases in some of our business units. I’m interested in learning more about it and exploring the capabilities of Cognite/CDF. I just watched the Cognite Product Tour 2024 on YouTube and was really impressed.
I am a newbie in this area. To set up the Cognite file extractor for local file upload to CDF, how do I configure the environment variables in the YAML? According to example-local.yaml, I need to provide: COGNITE_BASE_URL, COGNITE_PROJECT, COGNITE_CLIENT_ID, COGNITE_TOKEN_URL, COGNITE_CLIENT_SECRET. I am a little confused: should I create a .env file with all of those environment variables and put it in the same folder as the YAML config file, or can I inject those variables in the configuration file itself? If so, is there an example? I need someone to enlighten me.
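In case a sketch helps here: Cognite extractors read values like these from the process environment and support ${VAR} substitution inside the config YAML, so no separate .env mechanism is built in. A minimal sketch, assuming the usual extractor config layout (exact key names may differ by extractor version):

```yaml
# config.yaml -- values are substituted from environment variables at load time.
# The extractor does not read a .env file by itself; export the variables in your
# shell (or have your launcher/tooling load a .env file) before starting it.
cognite:
  host: ${COGNITE_BASE_URL}
  project: ${COGNITE_PROJECT}
  idp-authentication:
    client-id: ${COGNITE_CLIENT_ID}
    secret: ${COGNITE_CLIENT_SECRET}
    token-url: ${COGNITE_TOKEN_URL}
    scopes:
      - ${COGNITE_BASE_URL}/.default
```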
Building upon the recent waypoint and ghosting enhancements, the ability to guide a user to a location from their current position or a selected start point would be a valuable use case for our contractors during daily maintenance or turnaround activities. Contractors have often never been onsite, and navigation to the work area or a specific asset would be helpful. Think of Google Maps' walking feature, or a line connecting scan points in the 3D model to guide the user to the location.
Allow custom apps to be accessible in the InField Mobile view. This would allow customers to create and use a large variety of custom applications for users to easily access while in the field, while not having to leave the InField application.
We currently have the ability to turn off annotations & connecting lines in Canvas. I do not always want these annotations because they can make a canvas too "busy" on projects where I just want to share the canvas, collaborate, or present. Sometimes people I share the canvas with may not know how to turn these off. I would like Canvas to cache or "remember" my choice so the annotations/connecting lines do not have to be turned off each time someone opens the canvas. The default should be "on"; I would just like them to stay "off" once I make that choice. Attaching visual examples. I would be happy to answer any clarifying questions. Thank you!
It says that the project is not valid and does not give me the "hello world" output.
Welcome to the CDF Fundamentals Discussion! This discussion is dedicated to helping learners of the Cognite Data Fusion Fundamentals learning path succeed. If you’re struggling with the exercises in this learning path, try the tips & tricks below or post a comment with the challenge you’re facing. You can also post your own tips and respond to fellow learners’ questions. Cognite Academy’s instructors are also here to help.
Hello everyone! I’m Gerardo, an MES Consultant specializing in real-time data solutions for manufacturing. My goal is to learn more about Cognite and explore how it can contribute to my career, starting with CDF. Regards, Gerardo
I got a 401 "Unauthorized" error when using Postman, and sometimes a "Bad Request" error, even though I did everything as shown in the course. Please help; this has taken a lot of my time.
Hi there, I know there’s a lot happening in NEAT right now, and we're doing our best to keep up with the changes and learn from all the improvements. That said, I have a question regarding the documentation. The docs refer to core_data_model, which doesn’t seem to exist. Should we be using the example to load the core data model instead? For example, the documentation shows:

neat.read.cdf.core_data_model(
    ["CogniteAsset", "CogniteEquipment", "CogniteTimeSeries", "CogniteActivity", "CogniteDescribable"]
)

But in our case, we’re currently using:

# extend the core data model through examples
neat.read.examples.core_data_model()
neat.subset.data_model(["CogniteAsset", "CogniteEquipment", "CogniteTimeSeries", "CogniteActivity"])
Hello Cognite team, I would like to request support for sum aggregation on synthetic time series. This feature would greatly enhance our data analysis capabilities and provide more comprehensive insights. We kindly request your consideration of this enhancement in your future development plans. Our technical team is available to discuss this in further detail and to provide any additional information that might be helpful. Best regards, Ievgen
Hi, I would like to check whether there is any CDF admin UI or any available APIs that provide telemetry and usage-related insights, such as:

- User activity logs: logins, user actions, dataset access, frequency of use
- API usage metering: calls made by service accounts or integrations, volume over time
- Application usage tracking: which integrations/apps are active, which endpoints are being used, etc.
- Quota and resource usage tracking: number of API calls, storage/compute loads

Are there any CDF admin dashboards, telemetry APIs, or audit logs available for this purpose? Please advise. Thanks
How can we configure an OPC UA data node (of string type) as an event in Cognite, instead of a time series (the default)? P.S. The existing OPC UA server does not support configuring it as an event.
I'm working with the Cognite hosted REST extractor and I'm not able to perform an incremental load; I get a Kuiper HTTP error while making a request. Can someone explain what the key name is when using query params for incremental load, and how the value should look in JSON with a conditional statement that picks a constant value on the first execution and last_run from the context after that? (Assume we have to modify the startindex and lastindex query params after the first execution.)
Hi. I’d like to see more aggregation functions in the Grafana data source. Currently only average, interpolation and step_interpolation are allowed. Any chance of implementing support for sum, count, etc.? Error: count is not one of the valid aggregates: AVERAGE, INTERPOLATION, STEP_INTERPOLATION. https://docs.cognite.com/cdf/dashboards/guides/grafana/timeseries
I am trying to use the CogniteSdk in C# to build my FDM model nodes and edges. I am able to insert the nodes accordingly, but I'm having an issue when trying to define the edge. As you can see below, creating an edge requires the "Type" to be defined. There is no error, but when I check my UserDashboard model, it does not link to my DashboardItem. Does anyone know how to insert an edge using the C# SDK?

FDM:

type UserDashboard @view(version: "c3020ef716088a") {
  userId: String!
  createdDateTime: Timestamp
  lastUpdateDatetime: Timestamp
  dashboard: [DashboardItem]
}

type DashboardItem @view(version: "62c414860c7734") {
  userId: String!
  index: Int!
  id: Int!
}
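Not C#, but for comparison, a hedged sketch of the equivalent call in the Python SDK. The key point is that for GraphQL-defined list fields the edge type conventionally uses the external id "<Type>.<field>" (here "UserDashboard.dashboard") in the model's space; the space and node ids below are hypothetical:

```python
from cognite.client import CogniteClient
from cognite.client.data_classes.data_modeling import DirectRelationReference, EdgeApply

client = CogniteClient()  # assumes auth is configured in the environment

SPACE = "my_space"  # hypothetical instance/model space; replace with your own

edge = EdgeApply(
    space=SPACE,
    external_id="user1-dashboard1",  # any unique id for the edge itself
    # The type ties this edge to the UserDashboard.dashboard field; without a
    # matching type reference, the model will not resolve the relation.
    type=DirectRelationReference(space=SPACE, external_id="UserDashboard.dashboard"),
    start_node=DirectRelationReference(space=SPACE, external_id="user-dashboard-1"),
    end_node=DirectRelationReference(space=SPACE, external_id="dashboard-item-1"),
)
client.data_modeling.instances.apply(edges=edge)
```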
In "Learn to Use the Cognite Python SDK" in the Data Engineer course, I got stuck on the hands-on test. As per the readme file, after I created a dataset and a root asset, I just do not know how to do this section:
- Read the `all_countries.csv` file as a dataframe and list all the unique regions in the world.
- For each geographical region, create a corresponding CDF asset that is under the "global" root asset and is associated with the "world_info" data set.
- Next, create country-level assets using the data from the CSV and link them to their corresponding region-level assets.
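One possible approach, as a minimal sketch: it assumes the CSV has "name" and "region" columns, that the root asset's external id is "global", and that the data set's external id is "world_info" (all of which may differ in the actual exercise):

```python
import pandas as pd
from cognite.client import CogniteClient
from cognite.client.data_classes import Asset

client = CogniteClient()  # assumes auth is configured in the environment

df = pd.read_csv("all_countries.csv")
regions = df["region"].dropna().unique()  # assumes a "region" column
print(regions)

data_set = client.data_sets.retrieve(external_id="world_info")  # assumed external id

# One asset per region, parented under the "global" root asset.
region_assets = [
    Asset(
        name=region,
        external_id=f"region_{region}",
        parent_external_id="global",  # assumes the root asset's external id
        data_set_id=data_set.id,
    )
    for region in regions
]
client.assets.create(region_assets)

# Country-level assets linked to their region via parent_external_id.
country_assets = [
    Asset(
        name=row["name"],  # assumes a "name" column for the country
        external_id=f"country_{row['name']}",
        parent_external_id=f"region_{row['region']}",
        data_set_id=data_set.id,
    )
    for _, row in df.dropna(subset=["region"]).iterrows()
]
client.assets.create(country_assets)
```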
I’d like the ability to rename files. I do not care if it takes a long time to complete, or even if I need to replace the internal CDF ID. I just need some feature or option that will allow me to rename a file. Sometimes data pipelines require us to put files into CDF before we can craft a useful name, and having to write extra workarounds seems silly; put that burden on CDF!
Jupyter Notebook in Cognite Data Fusion is currently in beta. We are very excited about the interest you have shown, and want to engage even more with you through this group. Please use this community to ask questions about Jupyter Notebook, suggest features or discuss problems. To get you started, here are some useful resources to look into:
- Jupyter Notebook documentation
- Jupyter Notebook tutorial
- Jupyter Notebook Academy microlearning course
We use Synthetic Time Series (STS) for as many things as we can, to avoid the trouble of re-indexing and interpolating in pandas. However, CDF Synthetic Time Series can’t do any basic logical, comparison or lag operations. For instance, say I want a time series that gives me 1 when my TS value is between an LCL and a UCL, and 0 otherwise. In pandas this is simple: x.between(LCL, UCL).astype(int). But since CDF STS doesn’t support if, and, not, >, < etc., this doesn’t work. Another use case was when I needed the moving average for the last hour. I had to create a time series and manage its creation, DM entry etc., when I would have preferred an STS, since that calculation was only needed as an intermediate step. The map command in CDF STS looked promising, but it doesn’t work without a string input. I’d like STS to support: if(), and(), or(), not(), lag(), lead(), movingAverage(), movingSum(), abs()
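For anyone hitting the same wall, a minimal pandas workaround sketch; the time series external id and control limits are hypothetical:

```python
from cognite.client import CogniteClient

client = CogniteClient()  # assumes auth is configured in the environment

# Hypothetical external id and window; replace with your own.
df = client.time_series.data.retrieve_dataframe(
    external_id="my_ts", start="30d-ago", end="now"
)
x = df["my_ts"]

LCL, UCL = 10.0, 50.0
in_limits = x.between(LCL, UCL).astype(int)  # 1 inside [LCL, UCL], else 0

# Rolling one-hour mean as an intermediate result, without persisting a new TS.
hourly_avg = x.rolling("1h").mean()
```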