While going through the academy course, many pages are not displaying properly, starting with the “Check your knowledge” page under the “Introduction to entity matching” section and continuing through the remainder of the course.
Hi, while attempting to run

SELECT
  workpackage_desc AS description,
  to_timestamp(completion_time/1000) AS endTime,
  to_timestamp(start_time/1000) AS startTime,
  type,
  subtype,
  concat('FirstnameBirthyear:', key) AS externalId,
  1234567890 AS dataSetId
FROM IFSDB.maintenance
WHERE to_timestamp(completion_time/1000) > to_timestamp(start_time/1000)

I get this error: Verify that the 'IFSDB.maintenance' relation exists | code: 400 | X-Request-ID: 019f535a-a30f-9184-b3d3-966184e9d36b
Note that I substituted my name and birth year and changed the dataset ID. My assumption: I'm not sure I have an IFSDB.maintenance RAW table (not sure if I skipped something in the lesson...).
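A minimal way to rule out the missing-table case, sketched below with the Cognite Python SDK (assuming an already-authenticated client), is to list the RAW databases and their tables and confirm that IFSDB.maintenance actually exists before pointing the transformation at it:

# A minimal sketch, assuming the Cognite Python SDK and an already-configured client,
# to check whether the RAW database "IFSDB" and its "maintenance" table exist.
from cognite.client import CogniteClient

client = CogniteClient()  # assumes credentials are configured via environment/config

# List RAW databases and check for "IFSDB"
databases = [db.name for db in client.raw.databases.list(limit=None)]
print("IFSDB present:", "IFSDB" in databases)

# If the database exists, list its tables and check for "maintenance"
if "IFSDB" in databases:
    tables = [t.name for t in client.raw.tables.list(db_name="IFSDB", limit=None)]
    print("maintenance table present:", "maintenance" in tables)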
Hi team, the Hess team has come up with a request regarding the Documentum extractor; please refer to the context below. Currently we have Cognite connected to Documentum via the raw folder, which pulls in the raw file format from EDMS. We actually need to connect to Documentum's rendition folder, where the PDF versions of all the files are saved. Cognite can only effectively contextualize PDFs, so we need to connect to the rendition folder directly. We contacted the Documentum team, but they've said that only people on the Documentum team can connect to that folder. We just need the Cognite extractor to have access to that rendition repository. Please let me know if you need more information. Projects: hess-dev, hess-us
When we select different date ranges, the granularity changes with the number of days selected: for 2 days it is 3 min; for 3 days it is 5 min. Can we get access to any document describing how this pattern is designed?
Hi team, please enable and configure the Air setup in the CDF project below (Accenture-demo-dev) under the EUROPE1-GOOGLE cluster. Please let me know if you need any details.
Hello, I have a chart I have already created with multiple time series. Is it possible to link that chart to a canvas? Thank you.
As per the picture below, we need to click “Create dataset” after navigating to “Use the Data Catalog”. But I am unable to find this option for creating a dataset; can someone help?
Hi team, I am trying to log in to the Cognite Support Portal (https://cognite.zendesk.com/hc/en-us/requests/). I am not able to enable MFA for my account using Microsoft Authenticator. Please help me with the steps to do this. I am getting the window below while trying to log in, but I am not getting any code in my mobile Authenticator app.
Hi, I am on the Data Engineer Learn training. When I try to make my first API request for the publicdata project, I get a 404 Not Found error. The GET URL I was asked to use is {{baseurl}}/api/v1/projects/{{project}}/assets. I also replaced {{project}} with ‘publicdata’, with the same result. Kindly let me know.
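For comparison, here is a minimal sketch of the fully expanded request, assuming publicdata is served from https://api.cognitedata.com and that a valid access token is available (the token value is a placeholder). A 404 often just means a placeholder was not substituted into the URL, so printing the final URL helps:

# A minimal sketch of the expanded request; base URL and token are assumptions/placeholders.
import requests

BASE_URL = "https://api.cognitedata.com"   # assumption: cluster hosting publicdata
PROJECT = "publicdata"
TOKEN = "<your-access-token>"              # placeholder, not a real credential

url = f"{BASE_URL}/api/v1/projects/{PROJECT}/assets"
response = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
print(response.status_code, response.url)
print(response.json() if response.ok else response.text)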
I want to complete Data Engineer Basics - Integrate, but unfortunately, to get access I need to purchase it with 2 credits. Can someone advise on this? Regards, Amelia
We are having performance issues with event retrieval whenever there are many events, even fewer than 500. Can you help me understand how partitioning or pagination can be done? We need to perform an operation on each event after retrieval and send it as an API response. I tried the below:

for unstructured_insight in client.events(type="Insight", limit=None, partitions=10):
    # do something with unstructured_insight in each partition in parallel to reduce response time
    ...

I observed that if there are 64 events, then all 64 events are retrieved in one execution. How can we get one partition at a time, and process the first partition in parallel while retrieving the second, to reduce time?
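One way to get the overlap described above, sketched here under the assumption that the Cognite Python SDK is used, is to iterate the events in chunks (chunk_size) and submit each chunk to a thread pool as it arrives, rather than relying on partitions to split the work:

# A minimal sketch (not the only way) of overlapping retrieval and processing,
# assuming the Cognite Python SDK and an already-configured client.
from concurrent.futures import ThreadPoolExecutor
from cognite.client import CogniteClient

client = CogniteClient()  # assumes credentials are already configured

def process_chunk(events):
    # placeholder for whatever per-event work is needed
    return [e.external_id for e in events]

futures = []
with ThreadPoolExecutor(max_workers=4) as executor:
    # chunk_size makes the iterator yield EventList objects instead of single events,
    # so processing of chunk N can run while chunk N+1 is still being fetched.
    for chunk in client.events(type="Insight", chunk_size=100, limit=None):
        futures.append(executor.submit(process_chunk, chunk))
    results = [f.result() for f in futures]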
Hello Community, CDF stores the data in the cloud. Does anyone know what database / database structure is being used, and is there any particular reason for using Spark SQL for transformations?
We’re attempting to execute 10 concurrent jobs, and each one will create approximately 20 Timeseries objects, immediately ingesting datapoints for these newly created Timeseries objects in a loop. However, if we try to run 500 similar jobs with 10 concurrently running at a time, about 10% of the jobs fail due to a ‘Timeseries not found’ error. We theorize that the creation of a Timeseries object can take some time. Therefore, immediately ingesting the datapoint might fail because Cognite doesn’t refresh quickly enough to find the corresponding Timeseries object. This issue only occurs when we attempt to run multiple concurrent jobs. Has anyone else encountered similar issues?
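For reference, here is a minimal sketch (assuming the Cognite Python SDK; the external ID and datapoints are placeholders) of a retry-with-backoff around the datapoint insert, which is one common way to tolerate a short propagation delay after creating the time series:

# A minimal sketch of retrying the datapoint insert when the freshly created
# time series is not yet visible. External ID and values are placeholders.
import time
from cognite.client import CogniteClient
from cognite.client.data_classes import TimeSeries
from cognite.client.exceptions import CogniteNotFoundError

client = CogniteClient()  # assumes credentials are already configured

ts_external_id = "job-42/ts-1"  # hypothetical external ID
client.time_series.create(TimeSeries(external_id=ts_external_id, name=ts_external_id))

datapoints = [(1700000000000, 1.23)]  # (timestamp in ms, value) placeholders
for attempt in range(5):
    try:
        client.time_series.data.insert(datapoints, external_id=ts_external_id)
        break
    except CogniteNotFoundError:
        # the time series may not be visible to the ingestion path yet; back off and retry
        time.sleep(2 ** attempt)
else:
    raise RuntimeError(f"time series {ts_external_id} never became visible")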
I’m trying to create assets as mentioned in the “## 2. Create The Asset Hierarchy” section of the hands-on tasks, but I’m getting an error, although I have checked the solution and it’s exactly the same as the solution you have provided. The task says: For each geographical region, create a corresponding CDF asset that is under the "global" root asset and is associated with the "world_info" data set.

My solution:

list = []
for region in df['region'].unique():
    asset = Asset(name=region, parent_id=14499569942375, data_set_id=621288550636820)
    list.append(asset)
client.assets.create(list)

The error message I’m getting when running the above code:

ValueError                                Traceback (most recent call last)
Cell In[100], line 6
      3 asset = Asset(name=region, parent_id=14499569942375, data_set_id=621288550636820)
      4 list.append(asset)
----> 6 client.assets.create(list)
File ~/workspace/using-cognite-python-sdk/.venv/lib/python3.10/site-packages/cognite/client/_api/assets.py:432, in AssetsAPI.create(self, asset)
402
Hello! I have 3 quick questions that come to mind:
From your perspective, what is the one-sentence value statement of CDF?
How can we justify all the manual work required to prepare the data to ingest into the platform?
How do our customers save money by using CDF?
Hi, there seems to be an issue today where the workflow jobs scheduled for this morning were created, but the transformations in them were not triggered. In other words, the workflow is running, but nothing is happening. I also noticed a similar issue on Sunday (17.03), where I had a workflow job that was created at 06:00, but the first transformation was not triggered until 13:00. Is this a known issue? Thank you! Sebastian
type Employee {
  name: String!
  department: Department!
}

type Department {
  name: String!
}

I can't model a use case where an employee must belong to a department.
(RAW table: as above, with 179 columns in total; 41 contain numeric data and 138 contain text data.) I have ingested data into the RAW table. Now I want to get only the data from the 41 columns that have a numeric data type. How can this be done in the transformation? I want to run a similar query for the remaining 138 columns that have a text data type.
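Spark SQL has no built-in way to select columns by data type, so the column names generally have to be listed explicitly in the transformation. One hedged way to build those two lists, sketched below with the Cognite Python SDK and pandas (database and table names are placeholders), is to pull the RAW table into a dataframe and split the columns by inferred dtype:

# A minimal sketch for splitting the RAW table's columns into numeric and text groups
# so they can be pasted into the transformation's SELECT list. "mydb" and "mytable"
# are placeholders. Note: this only works if numeric values were ingested as numbers,
# not as strings.
from cognite.client import CogniteClient

client = CogniteClient()  # assumes credentials are already configured

df = client.raw.rows.retrieve_dataframe(db_name="mydb", table_name="mytable", limit=None)

numeric_columns = list(df.select_dtypes(include="number").columns)
text_columns = [c for c in df.columns if c not in numeric_columns]

print(len(numeric_columns), "numeric columns:", numeric_columns)
print(len(text_columns), "text columns:", text_columns)

# The numeric list can then be joined into the transformation SQL, e.g.
# "SELECT " + ", ".join(numeric_columns) + " FROM mydb.mytable"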
I want to read data from Snowflake and load it to CDF RAW. Could someone suggest what can be the best solution to implement this? Are there any specific extractors for doing it ?
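If a dedicated extractor turns out not to fit, one lightweight option is a small script (or scheduled job) that reads from Snowflake and writes to RAW. Below is a minimal sketch assuming the snowflake-connector-python package and the Cognite Python SDK, with all connection parameters, names, and the key column as placeholders:

# A minimal sketch of pulling a Snowflake table into a dataframe and writing it to CDF RAW.
import snowflake.connector
from cognite.client import CogniteClient

client = CogniteClient()  # assumes CDF credentials are already configured

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)
cursor = conn.cursor()
cursor.execute("SELECT * FROM MY_SOURCE_TABLE")  # placeholder query
df = cursor.fetch_pandas_all()

# RAW rows need a unique key per row; here we assume a column named "ID" exists
df = df.set_index(df["ID"].astype(str))

client.raw.rows.insert_dataframe(
    db_name="snowflake_staging",        # placeholder RAW database
    table_name="my_source_table",       # placeholder RAW table
    dataframe=df,
    ensure_parent=True,                 # create the database/table if missing
)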
Hello,I have been trying to load a CSV file into a table while going through the Cognite Data Fusion Fundamentals course. I have tried for over an hour to load the data correctly. One of the column headers adds 2 commas after the text and is throwing off the SQL formula. I am using the stock CSV file provided in the course, loading it straight from my desktop and not making any changes to the file itself.
As soon as I add a measurement in a checklist template and click "Confirm", it is not saved. After opening the template again, my measurement input is no longer available. Am I doing something wrong here? All fields (name, unit, min/max value) are filled in.
Hello, I am working at HubOcean as a backend engineer and I need to access the logs for the API calls we make to CDF. Is there a procedure we can follow to get the logs?
Hello everyone, is there any row count storage limit for the instances we store in a container/view in a solution data model? Regards, Adarsh
I am able to browse ds-basics, cdf-fundamentals, etc.
We are seeing multiple 408 response codes from the GraphQL query API. There are multiple combinations of filters that are not working. For example, the query below returns 408; I think the problem is with the array length of the filter entity.entityName. If the array length is 2 it works; otherwise it does not.

query MyQuery {
  listTimeSeriesRef(filter: {and: [
    {entity: {entityType: {in: ["NAVIGATOR", "SCREEN_CFG_REF"]}}},
    {property: {in: ["TUBING_TEMP"]}},
    {entity: {entityName: {in: ["Process.ReAllocation", "Analysis Point by Desks", "Downtime.GroupDowntime.DateTime.Subsystem"]}}}
  ]}) {
    edges {
      node {
        property
        entity {
          lastUpdated
          entityType
          entityName
        }
      }
    }
  }
}