Team, I have a customer with a Jupyter notebook that they are trying to use with the Cognite Python SDK API. Where do they find the correct value to put in TENANT_ID? Any recommendation or documentation that I can refer them to? Thank you in advance!
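For reference, TENANT_ID is typically the Azure AD (Entra ID) directory tenant ID, visible in the Azure portal under Microsoft Entra ID > Overview. A minimal sketch of where it goes in the SDK configuration; the project, cluster, and client ID values below are placeholders:

```python
from cognite.client import CogniteClient, ClientConfig
from cognite.client.credentials import OAuthInteractive

# TENANT_ID is the Azure AD (Entra ID) directory/tenant ID from the Azure portal.
# All values below are placeholders.
TENANT_ID = "00000000-0000-0000-0000-000000000000"
CLIENT_ID = "my-app-registration-client-id"  # hypothetical app registration
CLUSTER = "api"                              # e.g. api, westeurope-1
PROJECT = "my-project"                       # hypothetical CDF project

credentials = OAuthInteractive(
    authority_url=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_id=CLIENT_ID,
    scopes=[f"https://{CLUSTER}.cognitedata.com/.default"],
)
client = CogniteClient(
    ClientConfig(
        client_name="jupyter-notebook",
        project=PROJECT,
        base_url=f"https://{CLUSTER}.cognitedata.com",
        credentials=credentials,
    )
)
```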
I am receiving the error below when executing a function I have created in CDF. If I understand it correctly, the suod library is attempting to write to a read-only directory. I have not found a way to configure the suod library to use an alternate path. I did locate the dump command in the cost_predictor.py file, where I placed commands to write to the /tmp directory instead. However, I could use some guidance on how to get the altered suod library loaded into CDF with my function. I have attached both the original and updated cost_predictor.py files, with code to redirect the dump to the system temp directory, and would appreciate any guidance on whether I am on the right path here or not. Thanks in advance! Error text:
2024-05-16 04:15 Function started
2024-05-16 04:15 : The function terminated unexpectedly. Please contact Cognite support if the problem persists.
2024-05-16 04:15 : _point.py", line 459, in run_handle
2024-05-16 04:15 : result = handle(*function_argument_values)
2024-
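One pattern that may avoid forking the library entirely: in Cognite Functions, /tmp is typically the only writable location, so redirecting relative-path writes and Python's temp machinery there at the top of the handler sometimes suffices. A sketch, assuming suod writes via relative paths or tempfile (import path per the suod package docs):

```python
import os
import tempfile

def handle(client, data):
    # In Cognite Functions, /tmp is usually the only writable directory.
    # Redirect relative-path writes and tempfile usage there *before*
    # importing libraries that write to disk at import/fit time.
    os.chdir("/tmp")
    tempfile.tempdir = "/tmp"
    os.environ["TMPDIR"] = "/tmp"

    from suod.models.base import SUOD  # imported after the redirects

    # ... build and fit the model as before ...
    return {"status": "ok"}
```

If the library hardcodes an absolute path and must be patched, vendoring the modified module alongside handler.py in the uploaded function folder may shadow the installed package, since the function directory is on sys.path.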
Hi, I have some questions on Cognite. Would you please help?
1. Files
1.1 To delete Cognite files, after deleting the files do we need to delete the documents too?
1.2 After we create the files, the documents are created by Cognite automatically, but there seems to be a somewhat random delay before the document is created. Why?
1.3 What is the concept of a document in Cognite? Why is it created? Is there any documentation on that?
2. Sequences
2.1 To delete Cognite sequences, after deleting the sequences do we need to delete the datapoints too?
3. Time series
3.1 To delete Cognite time series, after deleting the time series do we need to delete the datapoints too?
4. Deletion order for containers and views
4.1 To delete all FDM data in a space, the order of deletion should be: edges, nodes, containers, view versions, data models, space. And the containers should be deleted first, then the view versions, right?
Thank you!
Alice
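Regarding 4.1, a sketch of a teardown order with the SDK, under the assumption that dependencies point the other way around (data models reference views, views reference containers, so data models go before views and views before containers); the space name is a placeholder:

```python
from cognite.client import CogniteClient

client = CogniteClient()  # assumes auth is configured elsewhere
SPACE = "my_space"  # hypothetical space

# 1. Edges first (they reference nodes)
edges = client.data_modeling.instances.list("edge", space=SPACE, limit=None)
client.data_modeling.instances.delete(edges=[e.as_id() for e in edges])

# 2. Then nodes
nodes = client.data_modeling.instances.list("node", space=SPACE, limit=None)
client.data_modeling.instances.delete(nodes=[n.as_id() for n in nodes])

# 3. Data models, then views (all versions), then containers
models = client.data_modeling.data_models.list(space=SPACE, limit=None, all_versions=True)
client.data_modeling.data_models.delete([m.as_id() for m in models])

views = client.data_modeling.views.list(space=SPACE, limit=None, all_versions=True)
client.data_modeling.views.delete([v.as_id() for v in views])

containers = client.data_modeling.containers.list(space=SPACE, limit=None)
client.data_modeling.containers.delete([c.as_id() for c in containers])

# 4. Finally the space itself
client.data_modeling.spaces.delete(SPACE)
```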
We are facing session IDP refresh errors from the UI while running workflows and transformations. We tried using a client ID and client secret. We have been facing this since yesterday. Let me know if any changes are required on our end.
Is it possible to increase the limit on execution of workflow instances per project, or to apply the limit per workflow instance instead of per project? We need to schedule workflows, and how many workflow instances we run depends on how much data we receive, so it can be more than 50.
Hey guys. I’m trying to create a function with some different constants. Do you know how I can change the names of the constants?
Is there currently a best practice for adding a geographical location to an object in a data model? I’ve considered simply adding a string property containing WKT-formatted strings to my model, or a Geospatial feature external ID, but neither of them seems ideal. I assume there is a size limit on strings, so WKT might be a bad choice? We’ve previously discussed this for time series, where you mentioned that data model geolocation was on the roadmap. Any update on this? 😊
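For reference, a sketch of the WKT-as-string workaround via the SDK; the view, space, and property names are hypothetical, and note that text properties in containers do have a size limit, so very large geometries may not fit:

```python
from cognite.client import CogniteClient
from cognite.client.data_classes.data_modeling import NodeApply, NodeOrEdgeData, ViewId

client = CogniteClient()  # assumes auth is configured elsewhere

# Hypothetical view with a plain string property holding WKT
node = NodeApply(
    space="my_space",
    external_id="site-001",
    sources=[
        NodeOrEdgeData(
            source=ViewId("my_space", "Site", "v1"),
            properties={"location_wkt": "POINT (10.4 63.4)"},
        )
    ],
)
client.data_modeling.instances.apply(nodes=node)
```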
Hi, in the documentation it is mentioned that 10,000 time series is the limit for each subscription. Is this a hard limit, or can it be changed based on the use case? We are using a CDF workflow, and its first function reads datapoint subscriptions to get the time series and datapoints. For our use case, 10,000 time series per subscription is too few: we are considering 10,000 wellbores and 200 properties (10,000 × 200 = 2 million time series). Any recommendations?
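If the 10,000 limit stands, one workaround is sharding across multiple subscriptions. A sketch; the external ID scheme is hypothetical and the class name is per recent SDK versions, so check yours:

```python
from cognite.client import CogniteClient
from cognite.client.data_classes import DataPointSubscriptionWrite

client = CogniteClient()  # assumes auth is configured elsewhere

# Hypothetical: 2M time series external IDs, sharded into chunks of 10,000
all_external_ids = [f"wellbore-{w}-prop-{p}" for w in range(10_000) for p in range(200)]

for i in range(0, len(all_external_ids), 10_000):
    chunk = all_external_ids[i : i + 10_000]
    sub = DataPointSubscriptionWrite(
        external_id=f"wellbore-sub-{i // 10_000}",
        partition_count=1,
        time_series_ids=chunk,
        name=f"Wellbore subscription {i // 10_000}",
    )
    client.time_series.subscriptions.create(sub)
```

Note that 2 million time series sharded this way means about 200 subscriptions, which may itself hit a per-project subscription limit, so this likely needs to be combined with raising limits with support.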
I’m working on a prototype for a flexible data model to store time series data in a way that is easy to catalogue, query and filter. Using Pygen both to populate and use the model seems convenient. At its current iteration, I’ve only applied direct relations and (undocumented?) @reverseDirectRelation in the GraphQL schema. I expected to be able to do something similar to client.windmill(windfarm="Hornsea 1").blades(limit=-1).sensor_positions(limit=-1).query(), as found in the Pygen documentation, but it does not work (my client.windmill analogue has no methods corresponding to its relations). Do I have to use edges instead of direct relations to query easily and declaratively with Pygen?
Hi, is it possible to use multiple clients in the same Cognite Function? I know the handle function only takes one client as an argument, but is it possible to initialize another client inside the function? I want my Cognite Function to read from the prod environment but write to the dev environment, which necessitates two clients. Thanks.
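Yes, nothing prevents constructing a second client inside the function; a common pattern is to pass the dev credentials via the function's secrets. A sketch; the project, cluster, secret names, and time series external ID are placeholders:

```python
from cognite.client import CogniteClient, ClientConfig
from cognite.client.credentials import OAuthClientCredentials

def handle(client, data, secrets):
    # `client` is the injected prod client; build a second client for dev.
    dev_client = CogniteClient(
        ClientConfig(
            client_name="my-function-dev-writer",  # hypothetical
            project="my-project-dev",              # hypothetical dev project
            base_url="https://api.cognitedata.com",
            credentials=OAuthClientCredentials(
                token_url=secrets["dev-token-url"],
                client_id=secrets["dev-client-id"],
                client_secret=secrets["dev-client-secret"],
                scopes=["https://api.cognitedata.com/.default"],
            ),
        )
    )
    # Read from prod, write to dev (hypothetical time series external ID)
    dps = client.time_series.data.retrieve(external_id="my-ts", start="1d-ago", end="now")
    dev_client.time_series.data.insert(dps, external_id="my-ts")
    return {"datapoints_copied": len(dps)}
```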
Are there any documented use cases or papers on integrating MLflow with Cognite, or is it something we need to implement ourselves? For example, if we aim to seamlessly integrate the MLflow UI with Cognite to evaluate and select the top-performing models, we could leverage SQLite, which operates on the local file system (e.g., mlruns.db) and provides a built-in client, sqlite3. However, our preference is to integrate it seamlessly with Cognite.
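I am not aware of an official integration; a hedged sketch of the DIY route, logging to the local SQLite-backed MLflow store from the question and archiving it in CDF Files so other sessions can pull it down (the file external ID and logged values are hypothetical):

```python
import mlflow
from cognite.client import CogniteClient

client = CogniteClient()  # assumes auth is configured elsewhere

# Local SQLite tracking store, as described in the question
mlflow.set_tracking_uri("sqlite:///mlruns.db")
with mlflow.start_run():
    mlflow.log_param("model", "xgboost")  # hypothetical values
    mlflow.log_metric("rmse", 0.42)

# Archive the store in CDF Files so it can be shared/restored elsewhere
client.files.upload(
    path="mlruns.db",
    external_id="mlflow-tracking-db",  # hypothetical external ID
    name="mlruns.db",
    overwrite=True,
)

# Later, in another session:
# client.files.download_to_path(path="mlruns.db", external_id="mlflow-tracking-db")
```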
Hi all, I'm trying to increment my view version. To do this, I'm also incrementing my data model version. My original is

```
type Test @view(space: "test_space", version: "1_0") {
  name: String!
  description: String
}
```

and I try to go to

```
type Test @view(space: "test_space", version: "1_1") {
  name: String!
  description: String
  test_write: String
}
```

I've also tried

```
type Test @view(space: "test_space", version: "1_2") {
  name: String!
  description: String
  test_write: String
  required_test_write: String!
}
```

I get this error:

```
{
  "title": "Error: could not update data model",
  "message": "An error has occured. Data model was not published.",
  ...
```
Within part 2 of the course, on the page named “Exploring nodes, edges and direct relations”, there is a table stating that edges count towards instance limits, but point 4 of the Part 2 Summary states the opposite?
We are a bit unclear on the difference in meaning between the "Uploaded at" vs. "Last Updated" times for Files in CDF. For example, we have seen unintuitive cases where the "Uploaded at" time is newer than the "Last Updated" time; we would expect that never to be the case. Can you define the logic for these two fields and update the documentation here? https://cognite-sdk-python.readthedocs-hosted.com/en/latest/files.html#module-cognite.client.data_classes.files Thank you.
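For reference, both timestamps are exposed on FileMetadata in the Python SDK; a quick way to inspect them side by side (the file external ID is a placeholder, and the comments paraphrase the field docs):

```python
from cognite.client import CogniteClient

client = CogniteClient()  # assumes auth is configured elsewhere

f = client.files.retrieve(external_id="my-file")  # hypothetical file
print(f.uploaded_time)      # when the file *content* upload completed (ms since epoch)
print(f.last_updated_time)  # when the metadata resource was last updated
print(f.created_time)       # when the metadata resource was created
```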
What access capabilities do I need to run transformations as “Current User”? I have a user who doesn’t see “Run as current user”, as in the first screenshot. The next screenshot is mine, and I can see it, probably because I am added as an admin.
CDF: the filter option is not working as expected under Common filters on the Data Explorer screen.
1. Log in to CDF.
2. Click the Data Explorer tab in the CDF menu bar.
3. Click the Files tab on the right side of the panel.
4. Set Data set to 'src:006:documentum:b60:ds' under Common filters on the left side of the screen.
5. Select the checkbox ‘Before’ under Common filters on the left side of the panel.
6. Click the calendar icon and set the date to (e.g.) '10-01-2023'.
Expected results: the document ‘Amarjeet_Test_DT.docx’ should not display in the results window because it was created after the set date.
Actual results: the document ‘Amarjeet_Test_DT.docx’ is displayed in CDF.
Note: the issue exists for all date filters (Created time, Updated time) with Before, After, and During in CDF. The user wants to know which date is used for filtering the documents in CDF with these filters.
Hello, when I tried to run the DB Extractor, I get the following error:
“polars_cpu_check.py:232: RuntimeWarning: Missing required CPU features.
The following required CPU features were not detected: avx, avx2, fma
Continuing to use this version of Polars on this processor will likely result in a crash.
Install the `polars-lts-cpu` package instead of `polars` to run Polars with better compatibility.
Hint: If you are on an Apple ARM machine (e.g. M1) this is likely due to running Python under Rosetta.
It is recommended to install a native version of Python that does not run under Rosetta x86-64 emulation.
If you believe this warning to be a false positive, you can set the `POLARS_SKIP_CPU_CHECK` environment variable to bypass this check.”
After doing some googling, I was able to install the referenced polars-lts-cpu package using Python, but I got the same error. I’m not sure how to make the extractor reference the polars-lts-cpu package when it runs. See the attached screenshot. The extractor i
I am facing this error in the data science course on “Creating Cognite Functions” with the Cognite SDK. In previous courses I had fixed this error by replacing the “datapoints” keyword with “time_series”. However, I would like to know whether I am perhaps not using the right packages, or whether the commands are deprecated and have new function names in newer versions. Kindly let me know; I am trying to finish these courses before my local bootcamp next week. Thanks, Lavanya
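For context, the course material predates SDK v5, where the datapoints API moved from client.datapoints to client.time_series.data. A sketch of the old vs. new calls (the time series external ID is a placeholder):

```python
from cognite.client import CogniteClient

client = CogniteClient()  # assumes auth is configured elsewhere

# Pre-v5 SDK (as in older course material):
# dps = client.datapoints.retrieve(external_id="my-ts", start="7d-ago", end="now")

# SDK v5 and later:
dps = client.time_series.data.retrieve(
    external_id="my-ts",  # hypothetical time series
    start="7d-ago",
    end="now",
)
print(len(dps))
```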
Data Workflow UI stopped rendering the workflow with the following error message. It stopped working as soon as I created a CDF function to trigger the workflow. https://delfi-us.fusion.cognite.com/shaya-dev/flows/SDM-ARP-Model-Refresh?cluster=az-eastus-1.cognitedata.com&env=az-eastus-1
Deployment of workflows: can you guide us on how to automate the deployment of Cognite workflows using the SDK?
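A sketch of scripted deployment with the Python SDK, suitable for a CI job; the workflow and function names are hypothetical, and class names are per recent SDK versions:

```python
from cognite.client import CogniteClient
from cognite.client.data_classes import (
    FunctionTaskParameters,
    WorkflowDefinitionUpsert,
    WorkflowTask,
    WorkflowVersionUpsert,
)

client = CogniteClient()  # assumes auth is configured elsewhere, e.g. via CI env vars

version = WorkflowVersionUpsert(
    workflow_external_id="my-workflow",  # hypothetical workflow
    version="v1",
    workflow_definition=WorkflowDefinitionUpsert(
        description="Deployed from CI",
        tasks=[
            WorkflowTask(
                external_id="step-1",
                parameters=FunctionTaskParameters(
                    external_id="my-cognite-function",  # hypothetical function
                    data={"param": "value"},
                ),
            )
        ],
    ),
)
client.workflows.versions.upsert(version)
```

Running this script from a pipeline on every merge gives repeatable, versioned workflow deployments.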
Is the following use case a good fit for Data Workflows? A Cognite Function that is reading data from time series and writing "event-like" data into Data Modeling:
- continuous processing for metric type A: ideally an execution every minute, but more importantly no overlapping executions (i.e., if an execution takes longer than a minute), with state passed from one function execution to the next
- hourly processing for metric type B

Idea: 2 Data Workflows (see the sketch below):
1. Hourly execution of a workflow that triggers a Cognite Function to calculate metric B.
2. Daily execution of a workflow that triggers a Cognite Function that recursively outputs a dynamic task, which calculates metric A. The Function outputs information for the next execution run (i.e., timestamp for the next execution, state, and other info). At the end of the day, the dynamic task would not output a timestamp for a next execution and the workflow would complete. The execution trigger could also be daily.
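For the recursive metric-A idea, Data Workflows' dynamic tasks let one task emit the next tasks at runtime. A sketch of what the definition side might look like; all names are hypothetical, and the exact reference-expression path should be checked against the dynamic-task documentation:

```python
from cognite.client.data_classes import (
    DynamicTaskParameters,
    FunctionTaskParameters,
    WorkflowDefinitionUpsert,
    WorkflowTask,
    WorkflowVersionUpsert,
)

# Task 1: a Cognite Function that returns a list of task definitions
planner = WorkflowTask(
    external_id="plan-metric-a",
    parameters=FunctionTaskParameters(
        external_id="metric-a-planner",  # hypothetical function
    ),
)

# Task 2: a dynamic task that executes whatever the planner emitted
runner = WorkflowTask(
    external_id="run-metric-a",
    parameters=DynamicTaskParameters(
        # Reference expression into the planner's output; exact path per the docs
        tasks="${plan-metric-a.output.response.tasks}",
    ),
    depends_on=["plan-metric-a"],
)

version = WorkflowVersionUpsert(
    workflow_external_id="metric-a-daily",  # hypothetical workflow
    version="v1",
    workflow_definition=WorkflowDefinitionUpsert(
        description="Daily recursive metric A processing",
        tasks=[planner, runner],
    ),
)
```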
Hi Team, is there any possibility that I can register for the Bootcamp virtually, or can it be held in India, as I am based in India? I also want to know whether the Bootcamp is for individuals or groups, and the cost of attending. Thanks, Navyasri Indupalli
After how much time does a retry of a function get triggered when it fails? Is there a way we can set the time before the function is re-triggered after it fails? It would be helpful for dealing with rate limits in functions, if any.
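Until platform-side retry timing is configurable, a common workaround for rate limits is backoff inside the function itself. A minimal sketch; the helper name and retried call are hypothetical:

```python
import time

from cognite.client.exceptions import CogniteAPIError

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry fn on HTTP 429 (rate limit) with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return fn()
        except CogniteAPIError as err:
            if err.code != 429 or attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Usage inside a function handler (hypothetical call):
# result = call_with_backoff(lambda: client.time_series.list(limit=1000))
```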
Hi, there are some bugs when doing contextualization in the Fusion GUI. It should be possible to “select all” when I do a search query in the interactive engineering diagrams contextualization workflow.
While testing versions of the cognite-sdk for Python >= 7.37.0, we noticed a performance issue in the retrieve_dataframe method of the time series API for all versions 7.37.x (0 <= x <= 3). Example: for the same data window, we tested version 7.0.0 and the latest versions. The processing time for version 7.0.0 in this method was less than 5 seconds, while for the newer versions it was 3 minutes.
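In case it helps reproduce the regression, a minimal timing harness; the time series external ID and data window are placeholders:

```python
import time

from cognite.client import CogniteClient

client = CogniteClient()  # assumes auth is configured elsewhere

t0 = time.perf_counter()
df = client.time_series.data.retrieve_dataframe(
    external_id="my-ts",  # hypothetical time series
    start="30d-ago",
    end="now",
)
print(f"retrieve_dataframe took {time.perf_counter() - t0:.1f}s for {len(df)} rows")
```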