I had a number of REST extractor pipelines running the other day and then paused them. I am now trying to resume them and I am getting a Startup Error message: “Missing required field: session key”. I do not see a field called “session key” referenced anywhere in the documentation for the REST extractor. Are you able to give any additional information on where I can find this field?
I'd like to inquire about the current status of the monitoring functionality in the workflow orchestration. Specifically, I'm interested in the setup for notification emails. It would be helpful to know if we are able to receive notifications in the event of a workflow failure. Could you please provide an update on this?
We have an RDF model based on CIM that we have imported into a CDF data model via NEAT, and we are experimenting with Pygen. We are happy with both NEAT and Pygen and are looking forward to exploring further. The names of the dataclass attributes become long with Pygen and the CIM naming convention.

Example 1:
- Model: cim:IdentifiedObject.name
- pygen: identified_object_name
- proposed pygen short: name

Example 2:
- Model: nek:NEKACLineSegment.wireSegmentKind
- pygen: nekac_line_segment_wire_segment_kind
- proposed pygen short: wire_segment_kind

Generic:
- Model: namespace:ClassName.attrName
- pygen: class_name_attr_name
- proposed pygen short: attr_name

Is it possible to add an option in Pygen to generate short attributes? The rule would be to keep only the string following the . (dot). I’m also interested in discussing other solutions with you. One option is of course to change the names in the model, but then we would not conform to CIM… Thanks! Regards, Olav
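The shortening rule described above (keep only the part after the dot, converted to snake_case) could be sketched as a small helper. This is a hypothetical illustration of the rule, not an existing Pygen option:

```python
import re


def short_attribute_name(model_name: str) -> str:
    """Derive a short attribute name from a CIM-style property name.

    'cim:IdentifiedObject.name'             -> 'name'
    'nek:NEKACLineSegment.wireSegmentKind'  -> 'wire_segment_kind'
    """
    # Keep only the part after the last dot...
    attr = model_name.rsplit(".", 1)[-1]
    # ...then convert camelCase to snake_case, as Pygen does for attributes.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", attr).lower()
```

Note that such an option would need a collision strategy, since two properties on the same class could shorten to the same name once the class prefix is dropped.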
Hi NEAT Team, I know you are very busy, as I’ve been following all the commits and changes in the repo, and I really appreciate all the work you’re doing. We will probably be working on migrating from an asset-centric data model to a new domain data model that extends cdf_cdm, and one challenge we anticipate is that the existing model does not have an explicit equipment concept: only assets. Could you confirm whether there are already plans within NEAT to address this, or whether we can create a custom mapper using metadata.assetType to classify certain assets as equipment during transformation? Our proposed approach involves:
- Using metadata.assetType to filter and classify specific assets as equipment (e.g., pumps, motors, valves).
- Mapping asset metadata into the equipment structure in the new model.
- Retaining parent-child relationships where relevant.

I understand this might not be a priority right now, but since this will likely be one of the tasks we’ll have to tackle soon, I wanted to…
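The proposed mapper could be sketched roughly as below. This is a hypothetical illustration; the equipment types and output field names are assumptions, not part of NEAT or cdf_cdm:

```python
# Illustrative sketch: classify classic assets as "equipment" based on
# metadata.assetType, keeping the parent link for later remapping.
EQUIPMENT_TYPES = {"pump", "motor", "valve"}  # assumed classification set


def classify_assets(assets: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split a list of classic-asset dicts into (equipment, remaining_assets)."""
    equipment, remaining = [], []
    for asset in assets:
        asset_type = (asset.get("metadata", {}).get("assetType") or "").lower()
        if asset_type in EQUIPMENT_TYPES:
            equipment.append(
                {
                    "externalId": asset["externalId"],
                    "name": asset.get("name"),
                    "equipmentType": asset_type,
                    # Retain the parent-child relationship where relevant.
                    "parentExternalId": asset.get("parentExternalId"),
                }
            )
        else:
            remaining.append(asset)
    return equipment, remaining
```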
Charts: daily average of the previous day. If anyone knows a good way to do this, please let me know. I would like to see the previous day's 24-hour values by averaging the data over the day (resampling to a granularity of 0 days 24:00:00), but because of the UTC handling versus UTC+9:00, the previous day's aggregation does not occur until 9:00 AM in Japan. I would like to see the average of the previous day's nightly processing volume at 8:00 AM. The only way I can think of is to use “shift time series” to force it forward; I would appreciate it if anyone has a better idea.
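Outside Charts, for example in a Cognite Function or a notebook, the previous-day average can be computed with timezone-aware bucketing instead of shifting the series. A minimal standard-library sketch, assuming timestamp/value pairs have already been retrieved:

```python
from datetime import datetime, timedelta, timezone

JST = timezone(timedelta(hours=9))  # UTC+9:00


def previous_day_mean(points: list[tuple[datetime, float]], now: datetime) -> float:
    """Mean of all points whose JST calendar date is the day before `now` (in JST).

    Bucketing on the JST date means the "previous day" is complete at local
    midnight, so the value is already available at 8:00 AM in Japan.
    """
    target = now.astimezone(JST).date() - timedelta(days=1)
    values = [v for ts, v in points if ts.astimezone(JST).date() == target]
    return sum(values) / len(values)
```

The key point is that the day boundary is taken in local time, not UTC, so no time shift is needed.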
Hello everyone, I’ve been working with Cognite Data Fusion for the past couple of months, and it has really caught my attention. Cognite AI can be a game-changer in our daily routine, helping us access the data and documents we need. I have a question regarding Cognite Industrial Canvas. I hope some of you are using it. Is there a shortcut key to switch from the arrow to the grab function? Each time I want to scroll the canvas, I have to click on the grab hand, which makes the process a bit slow. We have a shortcut key (“V”) to switch from the grab hand to the arrow, but I can’t find a shortcut key for the reverse. Can anyone help me with a solution? Thank you!
While reviewing the data-modeling-lifecycle documentation available on GitHub for Neat, one of the first steps mentioned is validating the sheet using Neat (a React app demonstrated in a YouTube video). However, I could not find any reference to the Neat UI for validating statements in the spreadsheet. Does this mean that this step can be skipped?
Can a Unified Namespace (UNS) architecture be implemented in Cognite Data Fusion?
Hi, I understand that we can define properties on edges, but it isn’t clear from the documentation how these properties are populated. I am trying to implement the following model, and defined the data model using DML as follows. How can I add a UserActivity edge with properties start_time and stop_time that I can read while listing all the activities that a User has performed? Thank you
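For reference, a hedged sketch of the payload shape the instances endpoint (POST /models/instances) accepts for an edge with properties. The space, view, and identifier names here are illustrative, not taken from the post:

```python
# Edge properties are written under "sources", through a view that maps
# the container holding the edge properties.
def user_activity_edge(user_id: str, activity_id: str,
                       start_time: str, stop_time: str) -> dict:
    return {
        "instanceType": "edge",
        "space": "my_space",                       # illustrative space
        "externalId": f"{user_id}:{activity_id}",
        "type": {"space": "my_space", "externalId": "UserActivity"},
        "startNode": {"space": "my_space", "externalId": user_id},
        "endNode": {"space": "my_space", "externalId": activity_id},
        "sources": [
            {
                "source": {
                    "type": "view",
                    "space": "my_space",
                    "externalId": "UserActivity",  # view mapping the edge container
                    "version": "1",
                },
                "properties": {"start_time": start_time, "stop_time": stop_time},
            }
        ],
    }
```

When listing a user's activities, the same view has to be requested as a source so the edge properties come back alongside each edge.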
Hi Team, I wanted to know if it is possible to integrate Spotfire with Cognite Data Fusion.
I came across documentation on how we can use GraphQL to override the type of a direct relation when using a CDM feature. Is there similar documentation highlighting the changes that need to be made in the YAML files when using the CDF Toolkit to deploy our model? Please note that I tried dumping the working GraphQL, but the resulting YAML files did not work when applied the other way around.
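For what it’s worth, a hedged sketch of how the equivalent override might look in a toolkit view YAML file: the mapped property’s source field sets the target view of the direct relation, mirroring the GraphQL type override. All names here are illustrative assumptions:

```yaml
# my_asset.view.yaml (illustrative names)
space: my_space
externalId: MyAsset
version: "1"
implements:
- type: view
  space: cdf_cdm
  externalId: CogniteAsset
  version: v1
properties:
  parent:
    container:
      type: container
      space: cdf_cdm
      externalId: CogniteAsset
    containerPropertyIdentifier: parent
    # Overrides the direct relation's target type (the GraphQL override):
    source:
      type: view
      space: my_space
      externalId: MyAsset
      version: "1"
```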
Is it possible to trigger an email for a set of dynamically provided email IDs from a Cognite function? Since I don’t have a static list, I can’t integrate the Cognite function with the Extraction pipeline to use its notifications. I couldn’t find any documentation on this. If not, can we integrate a third-party library for sending emails inside a Cognite function?
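If sending directly from the function is acceptable, the standard library can be used. A minimal sketch, assuming an SMTP relay reachable from the Functions runtime; the host, sender address, and credentials are illustrative and would normally come from the function’s secrets:

```python
import smtplib
from email.message import EmailMessage


def build_alert(recipients: list[str], subject: str, body: str) -> EmailMessage:
    """Build an email for a dynamically provided list of addresses."""
    msg = EmailMessage()
    msg["From"] = "alerts@example.com"  # illustrative sender
    msg["To"] = ", ".join(recipients)   # dynamic recipient list
    msg["Subject"] = subject
    msg.set_content(body)
    return msg


def send_alert(msg: EmailMessage, host: str = "smtp.example.com", port: int = 587) -> None:
    """Send the message via an SMTP relay (illustrative host/port)."""
    with smtplib.SMTP(host, port) as server:
        server.starttls()
        # server.login(user, password)  # credentials from function secrets
        server.send_message(msg)
```

This assumes outbound SMTP traffic is allowed from the Cognite Functions runtime; if it is not, a third-party HTTP-based email API called with an ordinary HTTP client would be an alternative under the same approach.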
When we deploy a data model using GraphQL, we can include comments as documentation giving relevant info. This is helpful for people who look at the data model’s GraphQL in the web application. However, when we deploy using the toolkit via YAML files, the generated data model’s GraphQL does not have comments. We can add comments by editing in the UI, but that has to be done manually. Is there any way to include documentation in the generated data model’s GraphQL when we deploy YAML files using the toolkit?
I use the scripts below, replacing my_sdk with my generated SDK (which was generated by Pygen), and get this error: “my client object has no attribute 'config'”. My scripts are: from my_sdk.config import global_config; global_config.validate_retrieve = False and my_sdk.config.global_config.validate_retrieve = False. Any guidance?
I am trying to deploy a data model using the CDF Toolkit in my Azure DevOps pipeline, so I checked the following page: https://docs.cognite.com/cdf/deploy/cdf_toolkit/guides/cicd/ado_setup However, I see a message at the top which says it is still in the alpha stage. We need this deployment to be done in our prod environments, so what is the currently recommended approach? Also, please let me know when this feature is expected to be generally available.
I am using the C# CogniteSdk to query a data model from CDF. When I tried to execute the query, it did not return row items; instead, it just returned the result below: What is the correct way to use the data model APIs in the C# CogniteSdk to return rows of data, as below?
Hi, the user is listed as UNKNOWN when an annotation is done manually from the Data explorer. This is problematic, as we need to be able to track who made the changes. Here is an annotation I added manually: And this is what I see from the API. I am able to identify the creating_app, which is Data explorer, but I need to know the user as well, or at least have some way to track which credentials made the change. UNKNOWN is not good enough. I’ve also noticed that for our automated annotations this field does not work as expected: these are all made by the same user/credentials, but for some reason they are all different, and it looks like the “creating_user” field is mapped to the “job_id” you get when sending a request to the API rather than to the user that made the request. I don’t know about others, but we at least need to be able to track which users/credentials created the annotations; which batch job uploaded the annotation is not relevant to us at all. Examples of the values we see: job.7445037198441218, job.5162020989972303, j…
We have encountered a scenario where we are unable to continue deploying to our production environment. The deployment fails on the IAM groups with an HTTP Error 400 and a note indicating the Cognite API failed to buffer the request body. Here are the important parts of the error:

Deploying 19 iam.groups(resource_scoped) to CDF...
Traceback (most recent call last):
  File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/cognite_toolkit/_cdf_tk/commands/deploy.py", line 423, in _update_resources
    updated = loader.update(resources)
    ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/cognite_toolkit/_cdf_tk/loaders/_resource_loaders/auth_loaders.py", line 402, in update
    return self._upsert(items)
    ^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/c
I created a small configuration using “cdf modules init” for the cdf_process_industry_extension_full data model and tried to deploy. No changes have been made to the configuration at this point, though I do intend to extend the data model. Deploying results in the following error:

Deploying 1 spaces to CDF...
Deploying 36 containers to CDF...
Deploying 36 views to CDF...
WARNING [LOW]: Resource(s) already exist(s), skipping creation. ← Note: they shouldn’t already exist???
Deploying 1 data models to CDF...
ERROR (ResourceCreationError): Failed to create resource(s). Error: One or more views do not exist:
'Gemini_sp_enterprise_process_industry_full:WorleySB360ImageAnnotation/v1',
'Gemini_sp_enterprise_process_industry_full:WorleySBAnnotation/v1', ← All three related to Annotation??
'Gemini_sp_enterprise_process_industry_full:WorleySBDiagramAnnotation/v1'.
| code: 400 | X-Request-ID: 536ccf9d-05cb-932c-b032-2fc57fd77a4f | cluster: az-eastus-1
The API failed to process some items. Successful (2xx): []
I'm trying to run the Cognite DB Extractor from a CMD window (as administrator, as a local user...), but I always get this error (see log):

requests.exceptions.SSLError: HTTPSConnectionPool(host='datamosaix-prod.us.auth0.com', port=443): Max retries exceeded with url: /oauth/token (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)')))
[1288] Failed to execute script '__main__' due to unhandled exception!

I think this is because a proxy server is used to access the internet from this VM. However, I'm using the same VM as the PI extractor, and that is running fine (I had to change some settings in the PiExtractor.exe.config file in the bin folder of the PI extractor, as documented). I have set the following Windows environment variables (this worked on the OPC UA extractor VM): HTTP_PROXY, HTTPS_PROXY, ALL_PROXY. I have installed the root CA certificate of the proxy (Zscaler) for the local user account, for…
I want to upload instances using the CDF Toolkit. For example, for a data model I can get a YAML dump of an existing data model using the toolkit, like this: cdf dump datamodel I want to get a YAML dump of the instances in my existing space and data model, which I can later deploy using the toolkit to a different space or environment. I checked the doc here: https://docs.cognite.com/cdf/deploy/cdf_toolkit/references/resource_library#nodes but creating the YAML files manually would be too hard. I have also tried to get a YAML dump using the Python SDK, but that dump looks too incomplete to be used for deployment.
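As a stopgap, the Python SDK dump can be reshaped into something closer to the node format described in the resource library. A hedged sketch, assuming the SDK's usual dump layout where properties are keyed by space and then by "viewExternalId/version" (all names illustrative):

```python
# Reshape one instance dict, as produced by
# client.data_modeling.instances.list(...).dump(), into a toolkit-style node.
def to_toolkit_node(instance: dict, view: dict) -> dict:
    properties = (
        instance.get("properties", {})
        .get(view["space"], {})
        .get(f"{view['externalId']}/{view['version']}", {})
    )
    return {
        "space": instance["space"],
        "externalId": instance["externalId"],
        "sources": [
            {
                "source": {
                    "type": "view",
                    "space": view["space"],
                    "externalId": view["externalId"],
                    "version": view["version"],
                },
                "properties": properties,
            }
        ],
    }
```

A list of such dicts could then be serialized to a node YAML file for the toolkit to deploy into another space or environment.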
Can we use a CSV file instead of a YAML file containing details of the instances we want to populate in a view using the CDF Toolkit? If yes, is there a limit to it? Edit: What I mean to ask is whether we can use a CSV file directly to populate a view in a data model, i.e. without using RAW and transformations, just like we can use a node.yaml file to deploy an instance directly into a view.
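If CSV is the source of truth, one workaround is to convert the rows into node definitions yourself before writing the node YAML file. A minimal sketch with illustrative column and view names (it assumes an externalId column and treats every other column as a view property):

```python
import csv
import io


def csv_to_nodes(csv_text: str, space: str, view: dict) -> list[dict]:
    """Turn CSV rows into toolkit-style node definitions."""
    nodes = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        external_id = row.pop("externalId")  # remaining columns become properties
        nodes.append(
            {
                "space": space,
                "externalId": external_id,
                "sources": [
                    {
                        "source": {"type": "view", **view},
                        "properties": row,
                    }
                ],
            }
        )
    return nodes
```

The resulting list could be dumped to a node YAML file for the toolkit, keeping CSV as the editing format without going through RAW and transformations.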
I figured out how to add a slider to my Chart, but how do I move or delete it?
In order to identify a 3D model and revision, both a model ID and a revision ID are needed (e.g. for display in Reveal). Looking at the Core Data Model, I can see the revision ID is available in CogniteCADRevision, but I don’t see where the model ID is present (I was expecting it in CogniteCADModel).
I am trying to deploy a function to CDF using the deploy-functions-oidc repo on GitHub. It works perfectly fine for other functions, but for functions using the “requests” library I get the following error during pre-commit: “Library stubs not installed for requests. Hint: python3 -m pip install types-requests”. I have included both requests and types-requests in the requirement.txt file. Has anyone experienced the same issue?