Recently active
Is there a recommended approach for using YAML configurations to automate the creation of spaces, containers, views, and data models in Cognite Data Fusion? How can we incorporate customizable parameters (e.g., space names, descriptions, and container properties) in the YAML files to make the process more flexible? Example YAML configuration for containers:

```yaml
containers:
  - name: "example_container_1"
    description: "First sample container"
    external_id: "example_container_1_id"
    properties:
      name:
        type: "Text"
        nullable: false
      parent:
        type: "DirectRelation"
        nullable: true
      isValid:
        type: "Boolean"
        nullable: true
    indexes:
      - index_name: "entity_name"
        type: "BTree"
        properties: ["name"]
  - name: "example_container_2"
    description: "Second sample container"
    external_id: "example_container_2_id"
    properties:
      identifier:
        type: "Text"
        nullable: false
    indexes:
      - index_name: "identifier_index"
        type: "BTree"
        properties: ["identifier"]
```

Each container is defined with the following: Properties: Each property ha
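One possible pattern, sketched below, is to render placeholders in the YAML (so space names, descriptions, etc. become parameters) and then map each entry onto the Python SDK's data modeling classes. This is only a minimal sketch, not an official approach: the file name `containers.yaml`, the `space_name` placeholder, and the type mapping are assumptions, and indexes from the YAML are not handled here.

```python
# Minimal sketch: parameterized YAML -> ContainerApply objects (indexes omitted).
import yaml
from string import Template

from cognite.client import CogniteClient
from cognite.client.data_classes.data_modeling import (
    ContainerApply,
    ContainerProperty,
    Text,
    Boolean,
    DirectRelation,
)

# Map the type names used in the YAML to SDK property types (assumption).
TYPE_MAP = {"Text": Text(), "Boolean": Boolean(), "DirectRelation": DirectRelation()}


def load_config(path: str, params: dict) -> dict:
    # Substitute ${space_name}-style placeholders before parsing the YAML.
    with open(path) as f:
        rendered = Template(f.read()).substitute(params)
    return yaml.safe_load(rendered)


def build_containers(cfg: dict, space: str) -> list[ContainerApply]:
    containers = []
    for c in cfg["containers"]:
        props = {
            name: ContainerProperty(type=TYPE_MAP[p["type"]], nullable=p["nullable"])
            for name, p in c["properties"].items()
        }
        containers.append(
            ContainerApply(
                space=space,
                external_id=c["external_id"],
                name=c["name"],
                description=c["description"],
                properties=props,
            )
        )
    return containers


client = CogniteClient()  # assumes credentials are already configured
cfg = load_config("containers.yaml", {"space_name": "my_space"})
client.data_modeling.containers.apply(build_containers(cfg, "my_space"))
```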
My team has uploaded all the laser scan (360) files unit-wise, but some files have errors. I want to check a file by its file number, but in 360 every file name is shown as unknown. How can I find the particular file?
We have an OPC UA extractor pulling data from an Ignition OPC server, which had been working well at a customer site for weeks. This is a try-before-you-buy scenario, so data-loss notification is important, as is responding to the customer about why this occurred. It seems something changed on the Ignition OPC server, preventing the extractor from accessing the tag data. We can see when the data stopped by using a Cognite chart. However, we did not receive an email notification of a pipeline failure over several days. Upon restarting the OPC extractor service, we did receive the pipeline failure email, as seen below. We know the data access problem is in the OPC server, so we are not asking Cognite for help with that, but it seems the pipeline SHOULD have sent a failure notification WITHOUT having to restart the service. That is the important fact to bring to Cognite's attention. Failure email message: Error: Root node does not exist: ns=2;s=[IAWAT_Tags_En
Hello team, we have tried to query the Entity view, which has a property `properties` of type view Property that is a reverse direct relation. We are using the query endpoint and the instances.query() SDK method to do so. We want the details of `properties` to come back in the select object of Entity. Providing the query below:

```json
{
  "with": {
    "0": {
      "limit": 50,
      "nodes": {
        "filter": {
          "and": [
            { "matchAll": {} },
            { "hasData": [ { "type": "view", "space": "slb-pdm-dm-governed", "externalId": "Entity", "version": "1_7" } ] },
            { "or": [ { "equals": { "property": ["node", "externalId"], "value": "fba80a5d3b994db698e74b77fb96f1de" } } ] }
          ]
        }
      }
    },
    "0_6": {
      "limit": 10000,
      "nodes": {
        "from": "0",
        "through": {
          "source": { "type": "view", "space": "slb-pdm-dm-governed", "externalId": "Property", "version": "1_7" },
          "identifier": "entity"
        },
        "direction": "inwards",
        "filter": {
          "and": [
            { "matchAll": {} },
            { "hasData": [ { "type": "view", "space": "slb-pdm-dm-governed", "externalId": "Property", "version": "1
```
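The query body above is cut off, but the part that usually matters for getting the reverse-related instances back is the `select` section. A minimal sketch, assuming the `with` keys above: each result-set name needs its own select entry with the corresponding view as a source, and the Property instances then come back under the `"0_6"` key of the response items rather than nested inside each Entity, so they typically have to be joined client-side.

```python
# Sketch of the "select" section for the query above (Python dict form).
select = {
    "0": {
        "sources": [
            {
                "source": {"type": "view", "space": "slb-pdm-dm-governed", "externalId": "Entity", "version": "1_7"},
                "properties": ["*"],  # all Entity properties
            }
        ]
    },
    "0_6": {
        "sources": [
            {
                "source": {"type": "view", "space": "slb-pdm-dm-governed", "externalId": "Property", "version": "1_7"},
                "properties": ["*"],  # all Property properties for the reverse relation
            }
        ]
    },
}
```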
When saving & scheduling a calculation, how do I specify the offset? For example, I can specify that I want the calculation to run every day, but how do I specify that I want it to run at 12pm every day?
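Not certain how the scheduling dialog exposes this, but if the schedule behind the calculation accepts a standard cron expression (as Cognite Functions schedules do), the noon-every-day case would look like the hedged example below.

```python
# Hedged example: standard cron order is "minute hour day month weekday".
DAILY_AT_NOON = "0 12 * * *"  # run at 12:00 every day (times are typically UTC)
```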
I am unable to find the client secret in the Learn portal. Can anyone help me with how to navigate to the client secret and generate one? Thanks and regards.
Hello, I am a new member at DataMosaix. After uploading test time series data to the DataMosaix Cloud, I can't find the menu or option to delete the uploaded test data. Could you please guide me on how to delete the data from the cloud? Thank you!
I am trying to get a count of all instances in a view that have been updated since a point in time. When using the /models/instances/list API call, I am able to filter on lastUpdatedTime with:

```json
"filter": {
  "range": {
    "property": ["edge", "lastUpdatedTime"],
    "gt": "2007-01-01T00:00:00+00:00"
  }
}
```

However, when I try to use this filter in the /models/instances/aggregate API call, I get a 400 error. I have tried different permutations of the property string for lastUpdatedTime without success. Any ideas? I am able to filter by properties defined in the data model, but as I understand it, lastUpdatedTime is not the same as a normal property.
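Whether the aggregate endpoint accepts the built-in lastUpdatedTime is the open question here, but in case the 400 comes from request formatting rather than the property itself, here is a hedged sketch of the same idea expressed through the Python SDK. The space, view, and version names are hypothetical; note the `["node", ...]` prefix for node instances (`["edge", ...]` would be for edges).

```python
from cognite.client import CogniteClient
from cognite.client.data_classes import aggregations as aggs
from cognite.client.data_classes import filters
from cognite.client.data_classes.data_modeling import ViewId

client = CogniteClient()
view = ViewId(space="my_space", external_id="MyView", version="v1")  # hypothetical

# Range filter on the built-in lastUpdatedTime of node instances.
updated_since = filters.Range(
    ["node", "lastUpdatedTime"],
    gt="2007-01-01T00:00:00+00:00",
)

result = client.data_modeling.instances.aggregate(
    view,
    aggregates=aggs.Count("externalId"),
    instance_type="node",
    filter=updated_since,
)
print(result)
```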
How do you set and see the interpolation rules for your dataset in CDF?
Our project has used the standard logging module for displaying information, warnings and errors in the Cognite Function log interface. This has worked on the Google tenant of our CDF (despite the warning in the docs), but no logs are shown in our new Azure tenant. Do we need to update our code to a different module, or write our own logger class? Or will the logging module be compatible?
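Not sure what differs on the Azure tenant, but since the Functions log view captures stdout/stderr (which is why print works per the docs), one hedged workaround sketch is to attach an explicit stdout handler inside the handler function rather than relying on the default logging configuration. The logger name below is hypothetical.

```python
import logging
import sys


def handle(client, data):
    logger = logging.getLogger("my_function")  # hypothetical logger name
    if not logger.handlers:
        handler = logging.StreamHandler(sys.stdout)  # route records to stdout
        handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
        logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    logger.propagate = False

    logger.info("Function started")
    logger.warning("This should appear in the Functions log UI")
    return {"status": "ok"}
```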
In the section “Visualizing data”, in the step “steps for adding gas to energy ratio”, I've done the steps to create the division expression for Gas to Power Ratio, but my chart just shows no data. I've already done the numerator and denominator aggregations separately, and they both show data when added to the chart. What am I doing wrong here? (Attachments: DAX with no data, Numerator, Denominator)
Hello, I am using a Litmus Edge system and would like to ingest simulated data into Cognite Time Series by leveraging the Litmus Cognite Connector. Additional details: we also tried manually creating a time series in Cognite Data Fusion (CDF) with the same name as in Litmus Edge, but we are still unable to successfully ingest data into the corresponding time series. Getting the error below in Litmus Edge:

```
2024-12-12T06:33:25.360349408Z {"error":{"code":401,"message":"Unauthorized"}}
2024-12-12T06:33:25.36049081Z publish message: request is unsuccessful: 401 Unauthorized
```
Hello, we have set up an extraction pipeline with notification alerts. One of the contacts defined is an email address for a Microsoft Teams channel: <xxx.yyy.onmicrosoft.com@emea.teams.ms>, but we have not received notifications to this channel over the last couple of days. Are these kinds of addresses blocked in some way? We know the alerts have been triggered because other contacts defined with different email addresses do receive the alert.
Hello, could we define default filters, locations, and column lists in the search UI so we don't have to redo the same work every time we open the Industrial Tools search UI? Could you please turn this into a feature request if it is not available today? Thanks.
Hi everyone, I need to create a query like the one the Cognite UI creates, for example:

```json
{
  "listEntity": {
    "with": {
      "0": {
        "limit": 50,
        "nodes": {
          "filter": {
            "and": [
              { "matchAll": {} },
              { "hasData": [ { "type": "view", "space": "slb-pdm-dm-governed", "externalId": "Entity", "version": "1_7" } ] },
              {
                "or": [
                  {
                    "and": [
                      {
                        "nested": {
                          "scope": ["slb-pdm-dm-governed", "Entity/1_7", "parent"],
                          "filter": {
                            "in": {
                              "property": ["node", "externalId"],
                              "values": ["1234"]
                            }
                          }
                        }
                      }
                    ]
                  }
                ]
              }
            ]
          }
        }
      },
      "0_2": {
        "limit": 10000,
        "nodes": {
          "from": "0",
          "through": {
            "source": { "type": "view", "space": "slb-pdm-dm-governed", "externalId": "Entity", "version": "1_7" },
            "identifier": "parent"
          },
          "direction": "outwards",
          "filter": {
            "and": [
              { "matchAll": {} },
              { "hasData": [ { "type": "view", "space": "slb-pdm-dm-governed", "externalId": "Entity", "version": "1_7" } ] }
            ]
          }
        }
      }
    },
    "select": {
      "0": {
        "sources": [
          {
            "source": { "type": "view", "space": "slb-pdm-dm-governed", "externalId": "Entity", "version":
```
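A hedged sketch of building the same shape of query programmatically with the Python SDK (assuming a recent cognite-sdk version) rather than hand-writing the JSON. The space, view, and version names are taken from the post; the exact class signatures should be checked against your SDK version.

```python
from cognite.client import CogniteClient
from cognite.client.data_classes import filters
from cognite.client.data_classes.data_modeling import ViewId
from cognite.client.data_classes.data_modeling.query import (
    Query,
    NodeResultSetExpression,
    Select,
    SourceSelector,
)

client = CogniteClient()
entity = ViewId("slb-pdm-dm-governed", "Entity", "1_7")

# Root result set: Entity instances whose "parent" direct relation points at "1234".
root_filter = filters.And(
    filters.HasData(views=[entity]),
    filters.Nested(
        scope=("slb-pdm-dm-governed", "Entity/1_7", "parent"),
        filter=filters.In(["node", "externalId"], ["1234"]),
    ),
)

query = Query(
    with_={
        "0": NodeResultSetExpression(filter=root_filter, limit=50),
        # Hop outwards through the "parent" direct relation from result set "0".
        "0_2": NodeResultSetExpression(
            from_="0",
            through=entity.as_property_ref("parent"),
            direction="outwards",
            filter=filters.HasData(views=[entity]),
            limit=10000,
        ),
    },
    select={
        "0": Select([SourceSelector(entity, ["*"])]),
        "0_2": Select([SourceSelector(entity, ["*"])]),
    },
)

result = client.data_modeling.instances.query(query)
```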
Hello team, I am looking for the list of supported data types that we can insert into a RAW table.
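Since RAW row columns are stored as JSON documents, any JSON value type can be inserted: string, number, boolean, object, array, and null. An illustrative sketch with the Python SDK is below; the database and table names are hypothetical.

```python
from cognite.client import CogniteClient
from cognite.client.data_classes import Row

client = CogniteClient()
client.raw.rows.insert(
    db_name="my_db",
    table_name="my_table",
    row=Row(
        key="row-1",
        columns={
            "a_string": "hello",
            "a_number": 42.5,
            "a_boolean": True,
            "an_object": {"nested": "value"},
            "an_array": [1, 2, 3],
            "a_null": None,
        },
    ),
)
```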
Hi everyone, one question: does the DB Extractor run on Windows 11? So far it hasn't worked for me.
These overlap, and this causes problems: the “Open in canvas” box overlaps and blocks some of the annotations.
@Tanmay Deshpande and @Tom Jonsthovel, posting your question here to make it easier to engage the developer team for support. We have successfully created the ILA group and can call the API, but we currently get stuck on ingesting logs (400 - bad request). Please see our steps below. It would be great if you could advise on how to properly ingest logs and share a Jupyter notebook with some examples. Also, it would be great to understand the link between the ingested log and the container. E.g., do we need to add a 'log' property to the container definition?
1. Created a data model.
2. Ingested a log into the container with externalId "Pump", linked to the "Pump" instance with externalId "66470bf0-5c07-4a39-8878-0adf4dc7f448".
3. Got a 400 in return.