I want to use Node.js with Express as my backend server to get data from Cognite, following their JS SDK docs (https://developer.cognite.com/sdks/js/). My backend code:

```js
import { ConfidentialClientApplication } from "@azure/msal-node";
import { CogniteClient } from "@cognite/sdk";

async function quickstart() {
  const pca = new ConfidentialClientApplication({
    auth: {
      clientId: "YOUR CLIENT ID",
      clientSecret: "YOUR CLIENT SECRET",
      authority: "INSERT AUTHORITY HERE",
    },
  });

  async function getToken() {
    const response = await pca.acquireTokenByClientCredential({
      scopes: ["INSERT SCOPE HERE"],
      skipCache: true,
    });
    return response.accessToken;
  }

  const client = new CogniteClient({
    appId: "Cognite SDK samples",
    baseUrl: "YOUR BASE URL",
    project: "YOUR CDF PROJECT NAME",
    getToken: getToken,
  });

  const assets = await client.assets.list();
  console.log(assets);
}

quickstart();
```

What is the authority, and how can I find mine?
Hi, I was wondering if it is possible to update an outdated 3D model with 360° laser scan images. The original model from when the plant was built was corrupted, and now we only have parts of it. I think this would help maintenance enormously. The images are currently in FARO Sphere, which is not the easiest to navigate around.
I am trying to set up an asset hierarchy using the Core Data Model, as shown in this example: https://docs.cognite.com/cdf/dm/dm_guides/dm_cdm_build_asset_hierarchy/ But I am getting some strange errors when I try to implement it. A normal container and view without a reference to the Core Data Model works fine, but it fails when I try to add CDM. I am using the Toolkit to deploy the data model.

In the "Data management" tab I get this error:

```json
{
  "title": "",
  "message": "Cannot read properties of undefined (reading 'listPump')",
  "errors": []
}
```

In the "Query explorer" I get these errors:

```json
{
  "errors": [
    {
      "message": "The field type 'Cognite3DObject' is not present when resolving type 'Pump' [@-1:-1]",
      "locations": [],
      "extensions": { "classification": "ValidationError" }
    },
    {
      "message": "The field type 'CogniteSourceSystem' is not present when resolving type 'Pump' [@-1:-1]",
      "locations": [],
      "extensions": { "classification": "ValidationError" }
    },
    { "message": "The field type 'CogniteAsset' is not
```
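For reference, the relevant part of my GraphQL model looks roughly like this (heavily simplified; the `@import` directive is how I understood the guide's way of pulling the Core Data Model types into my model, so this may be exactly where I'm going wrong):

```graphql
# Pull CogniteAsset in from the Core Data Model (syntax from memory, may be off)
type CogniteAsset @import(dataModel: {externalId: "CogniteCore", version: "v1", space: "cdf_cdm"})

type Pump implements CogniteAsset {
  pumpType: String
}
```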
I am using the Hosted Kafka Extractor and need to subscribe to multiple topics with specific filters. To do this, I plan to create multiple jobs under one connection. I have a few questions:

- What is the maximum number of jobs that can be created in a single connection?
- How many hosted extractors can be created in a Cognite project?
- Does the Hosted Kafka Extractor support wildcard characters for topic subscriptions?

I checked the Hosted Extractors and Kafka Extractor documentation, but I couldn't find clear limits on job or extractor counts. Also, the Kafka Extractor job seems to subscribe to a single topic at a time, so I'm unsure if wildcards are supported. Could someone clarify these limits and wildcard support?
I need to practice the basics of data engineering, but my Microsoft Authenticator does not work for the Cognite Fusion hands-on activities because it is set up on my old phone. I have a new phone now. How do I set up the authenticator for the Cognite Fusion portal on the new phone? Please reset my MFA.
I need to reset my MFA, as my old phone is damaged.
I am looking for clarity on the YAML requirements for limiting the visibility of datasets in Cognite to specific roles. As with other datasetScope'd objects, I've added datasetScope to my group definitions, but deployments are failing with the following error, which itself is not very descriptive about what the offending issue is:

```
raise CogniteAPIError(
cognite.client.exceptions.CogniteAPIError: Unknown field name:
`datasetScope` at items.capabilities[13].datasetsAcl.scope.? | code: 400 |
X-Request-ID: 26ec6765-06b8-9f77-9d95-ab49409fd950
```

In our group definition we're trying to limit datasets as follows, and it's failing. Adding to the difficulty of troubleshooting, the pre-build and dry-run deployments of our Toolkit CI/CD ran successfully; it wasn't until the actual deployment that things broke.
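For concreteness, the failing capability in our group YAML looks roughly like this (dataset id anonymized):

```yaml
# The scope the API is rejecting with "Unknown field name: `datasetScope`"
- datasetsAcl:
    actions:
      - READ
    scope:
      datasetScope:
        ids:
          - 1234567890123456
```

Re-reading the ACL docs, I suspect datasetsAcl itself scopes by dataset id via `idscope` rather than `datasetScope` (datasetScope being the scope *other* ACLs use to restrict themselves to datasets), but I'd appreciate confirmation.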
Hello, I'm having trouble sharing a Jupyter notebook on Cognite. I used the right-click option to copy the shareable link, but when I or others click on the link, we encounter a 404 error like the following: Has anyone else experienced this issue? Also, others can't find the notebook by going to the folder; only I can see it. Thanks!
I see how to create Templates used to query data, but I have not yet found information on defining GraphQL mutations. Do you know where I can find this information? Thanks! Here are some places I've been looking.

Docs:
- Templates | Cognite Documentation
- About template management | Cognite Documentation
- Search results for "templates" | Cognite Documentation

The tests in the Python SDK:
- https://github.com/cognitedata/cognite-sdk-python/blob/master/tests/tests_integration/test_api/test_templates.py

There is some info related to GraphQL mutations and Cognite here, but this doesn't seem related to Cognite's Template functionality:
- https://itg.cognite.ai/docs/tutorials/data-ingestion/ingesting-data
- https://github.com/cognitedata/sample-cdf-graphql-angular-app/blob/9641e4967e0a288241ef68b3e94786b56de456c2/src/app/itg-api.service.ts#L125
I have some gaps regarding the Cognite Python SDK that I hope someone could fill:

1. Is there a way to archive a data set with the SDK?
2. When I delete a file now, it doesn't delete the reference, so it looks like I still have files in my data set (attached picture). How can I delete the references as well?
3. Is there a way to delete a data model using the SDK? The documentation has:

```python
from cognite.client import CogniteClient

c = CogniteClient()
c.data_modeling.data_models.delete(("mySpace", "myDataModel", "v1"))
```

It runs, but it doesn't delete the data model for me. Please let me know if there is another way.
Hello, there are a number of Functions that used to work as expected but now fail to execute correctly on their set schedules in a CDF project. The .zip files associated with these Functions are not present. Is there a relation between the Functions failing and the absence of the .zip files under the Data Explorer tab? Thanks, Marwan
Hello team, is there any utility to convert a GraphQL model in CDF to YAML files that can be used with the CDF Toolkit for deployment? Currently, even after taking a dump using the SDK, we need to change the YAML format to match the supported syntax. Thanks, Snehal
My team has uploaded all the 360° laser scan files, unit-wise, but some files have errors. I want to look up a file by its file number, but in the 360 viewer every file name is shown as "unknown". How can I find a particular file?
I've encountered a bit of an interesting issue when converting annotated P&IDs to SVG files using this endpoint from the Python SDK:

```python
cognite_client.diagrams.convert(diagram_detect_results)
```

If I send in one file at a time, it works as expected. But if I try to convert two files at the same time, the annotations switch over for some reason: the annotations mapped to file_1 are shown on file_2 and vice versa. This also scales to more files, but is a lot easier to demonstrate with just two. I've attached the diagram detect results that I send to the API, and I can't see any reason why this would happen. This makes parallelization somewhat difficult, as we need to convert each file individually to ensure correctness.

Attachments:
- daigram_res.txt contains both files at once
- daigram_res_1.txt and diagram_res_2.txt contain each file as separate entities
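For now, my workaround is to split the combined detect payload by fileId before converting, so each convert call sees exactly one file. A minimal sketch of the splitting helper (the item shape here is assumed from my attached payloads, not taken from the SDK docs):

```python
def split_detect_items_by_file(detect_items):
    """Group diagram-detect result items by fileId so that each
    convert request contains annotations for a single file only."""
    per_file = {}
    for item in detect_items:
        # Each item is assumed to carry the fileId it was detected on.
        per_file.setdefault(item["fileId"], []).append(item)
    # One payload per file, preserving the original item order.
    return [{"items": items} for items in per_file.values()]
```

I then call `cognite_client.diagrams.convert(...)` once per payload, which avoids the swapped annotations at the cost of parallelism.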
Hi, I am encountering the following error while using Streamlit-AgGrid in Cognite's Streamlit: This link explains the workaround, but I'm not sure how to apply the fix in Cognite. Could anyone help? Is there a specific version of Streamlit or Streamlit-AgGrid I need to use to avoid this problem? Thanks in advance for your assistance!
The December release added tabs for directly and indirectly linked data. I am having trouble finding documentation that explains the difference between the two.
I noticed that when you're looking at a large time frame, the threshold feature in Charts only considers the aggregated values, not the raw or even min/max values. Is this the same for the data profiler and monitoring jobs?
Hello, we have a flexible data model with an Entity view, where a property named parent refers back to Entity as an edge. A user sends us a query to get all entities whose name is "SomeString". We construct the query below, but it errors out.

```json
{
  "with": {
    "Entity": {
      "nodes": {
        "filter": {
          "and": [
            { "matchAll": {} },
            {
              "hasData": [
                {
                  "type": "view",
                  "space": "slb-pdm-dm-governed",
                  "externalId": "Entity",
                  "version": "1_7"
                }
              ]
            },
            {
              "or": [
                {
                  "and": [
                    {
                      "nested": {
                        "scope": ["slb-pdm-dm-governed", "Entity/1_7", "parent"],
                        "filter": {
                          "in": {
                            "property": ["node", "name"],
                            "values": ["SomeString"]
                          }
                        }
                      }
                    }
                  ]
                }
              ]
            }
          ]
        },
        "chainTo": "destination",
        "direction": "outwards"
      }
    },
    "parent.Entity": {
      "nodes": {
        "from": "Entity",
        "filter": {
          "and": [
            {
              "hasData": [
                {
                  "type": "view",
                  "space": "slb-pdm-dm-governed",
                  "externalId": "Entity",
                  "version": "1_7"
                }
              ]
            },
            {
              "in": {
                "property": ["node", "externalId"],
                "values": ["56debee29ada4609beff4c12bb9244a3"]
              }
            }
          ]
        },
        "through": { "
```
Hello, I am at the end of the first module of the Domain Experts course. In order to complete the knowledge check, I need to access publicdata; however, when I log in to Cognite selecting publicdata as the organization, the Data Explorer tab is missing, and therefore no data is available. Has anyone else run into this issue, and how did you correct it? I contacted support@cognite.com several days ago with no reply yet, so I thought I would try the community while I wait. Thank you
Hello team, we are facing an issue updating the state table used by the DB extractor. The Windows service that runs the DB extractor is working fine. However, when we try to update the state table from a Python script, it does not work as expected. The config file is attached for reference.

Context: we have a configuration YAML file with 10 queries (attached here as a .docx), which operates in continuous mode and includes cron schedules. The DB extractor runs as a Windows service. Additionally, we've written a Python script, scheduled to run once a day, that attempts to change the 'high' value for certain rows in the state table. When we run the Python script, it executes successfully and the rows are modified as expected. However, the continuously running Windows service appears to modify the state rows as well. Could you please confirm whether the data in the state table is being cached? Or is it not acceptable for two processes (the Windows service and the Python script) to update the state table at the same time?
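To illustrate what our script does: it builds one RAW row per query and pushes it with `client.raw.rows.insert`. A minimal sketch of the row-building part (the "high"/"low" column names are assumptions based on our state table layout, so verify against yours):

```python
def build_state_row(key, high, low=None):
    """Build the {key: columns} mapping for one row in the DB
    extractor's RAW state table.

    The "high"/"low" column names are assumed from how our state
    table stores the incremental range; adjust if yours differs.
    """
    columns = {"high": high}
    if low is not None:
        columns["low"] = low
    return {key: columns}
```

The script then does something like `client.raw.rows.insert("state_db", "state_table", build_state_row("query_1", "2024-01-01T00:00:00"))`, where the database and table names here are placeholders for our real ones.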
I have changed phones and am not able to set up Microsoft Authenticator on the new one. Please reset my MFA access so that I can set up the authenticator again.
I cannot update (push) an existing time series with the client.time_series.data.insert() method:

```python
data_points = [
    {
        "timestamp": pd.to_datetime(df5.index[i]).timestamp() * 1000,
        "value": df5["My Feature"].iloc[i],
    }
    for i in range(len(df5))
]
client.time_series.data.insert(data_points, external_id=external_id_feature)
```

The approach I followed is: trim the specific date range, then insert the time series. Is there a standard way to update an existing time series?
I have an instance with a property called alarmcode. What I want to do is filter the instances on multiple values of alarmcode. The alarm code is a reference to another instance. In SQL, what I need would look like `alarmcode IN ('a', 'b', 'c')`.
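What I've tried so far is building the filter body for the instances query endpoint by hand, roughly like this (the view identifiers are placeholders, and the assumption that a direct relation is matched by node references is mine, not from the docs):

```python
def alarm_code_filter(space, view_external_id, view_version, codes):
    """Build an "in" filter (the SQL IN equivalent) over the
    alarmcode direct-relation property of a view.

    Values are written as node references because alarmcode points
    at another instance rather than holding a plain string.
    """
    return {
        "in": {
            "property": [space, f"{view_external_id}/{view_version}", "alarmcode"],
            "values": [{"space": space, "externalId": code} for code in codes],
        }
    }
```

With the Python SDK I believe the same thing can be expressed with `cognite.client.data_classes.filters.In`, but I'm unsure whether direct-relation values should be node references or plain externalIds here.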
Hi, I am on the CDF Fundamentals learning path and am currently in the "Events Transformation" section. I am trying to create a data set named "FirstnameBirthyear-IFSDB". I do not see an attached data set at the bottom of the page; do I use the same data set (assets CSV file) as before? The external ID for the new IFSDB data set is 8667716811777202. Thanks!
Hi, I'm going through the steps for the "CDF Fundamentals" course and am stuck on Working with CDF: Integrate → Asset transformation → Switch to SQL editor (enter code). Initially I was able to enter the code with no problem, and I replaced my data, "Stacey1994", in all the applicable places. When the error occurred for the first time, I refreshed the page and tried to repeat the steps, but when I get to "For NULL values on updates: Keep existing values (Default)", the error message below keeps popping up. Thanks, Stacey