SQL Transformations error. Please report this error to Cognite.
Job id: 91084409-b4e9-4e7c-b872-d1d26dcc2459 (cogniteId: 43334402)
Job id: 7b2d398b-4234-46a9-a377-95400ae3af48 (cogniteId: 43334400)
Job id: ff2d0960-b2df-4031-a48a-d2bc4b03e3c5 (cogniteId: 43334372)
How do I upload Excel files to CDF Files? I am looking for a way to upload files programmatically and schedule the upload using a cron expression.
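A minimal sketch of the kind of script that could be scheduled with cron (or run as a Cognite Function on a schedule); the cognite-sdk Python package is assumed to be installed and already configured with credentials, and the file name, external ID, and MIME type below are placeholders:

from cognite.client import CogniteClient

client = CogniteClient()  # assumes client configuration/credentials are set up elsewhere

def upload_excel(path: str) -> None:
    # Upload the local Excel file to CDF Files; overwrite=True lets the same
    # external ID be refreshed on every scheduled run.
    client.files.upload(
        path,
        name="daily_report.xlsx",
        external_id="daily_report_xlsx",
        mime_type="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
        overwrite=True,
    )

if __name__ == "__main__":
    upload_excel("daily_report.xlsx")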
We’re attempting to execute 10 concurrent jobs, each of which creates approximately 20 Timeseries objects and immediately ingests datapoints for those newly created Timeseries objects in a loop. However, when we run 500 similar jobs with 10 running concurrently at a time, about 10% of the jobs fail with a ‘Timeseries not found’ error. We theorize that creating a Timeseries object can take some time, so ingesting datapoints immediately afterwards may fail because Cognite doesn’t refresh quickly enough to find the corresponding Timeseries object. The issue only occurs when we run multiple concurrent jobs. Has anyone else encountered similar issues?
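A minimal sketch of the create-then-ingest pattern described above, with a retry added around the insert; the cognite-sdk Python package is assumed, the backoff values are illustrative, and this is not presented as Cognite's recommended fix:

import time
from cognite.client import CogniteClient
from cognite.client.data_classes import TimeSeries
from cognite.client.exceptions import CogniteNotFoundError

client = CogniteClient()  # assumes client configuration/credentials are set up elsewhere

def create_and_ingest(external_id: str, datapoints: list[tuple[int, float]]) -> None:
    client.time_series.create(TimeSeries(external_id=external_id, name=external_id))
    # Retry the insert a few times in case the new time series is not yet
    # visible to the datapoints API under heavy concurrent load.
    for attempt in range(5):
        try:
            client.time_series.data.insert(datapoints, external_id=external_id)
            return
        except CogniteNotFoundError:
            time.sleep(2 ** attempt)  # simple exponential backoff
    raise RuntimeError(f"Time series {external_id} never became visible")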
Hi, we are using the PI Extractor with an extraction pipeline and are pushing the config through the pipeline. However, we noticed that after updating the config in the extraction pipeline, we have to restart the PI Extractor to force it to use the updated config. When we update the config, we can see in the run history that the extractor reports “Seen” on the new config revision, but it does not load and use the new config. Is there a way to force it to use the new config?
Hello, I have been trying to load a CSV file into a table while going through the Cognite Data Fusion Fundamentals course. I have tried for over an hour to load the data correctly. One of the column headers adds two commas after the text, which throws off the SQL formula. I am using the stock CSV file provided in the course, loading it straight from my desktop without making any changes to the file itself.
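A minimal sketch of a local clean-up step, assuming pandas is available and using a placeholder file name, that drops the empty columns created by the stray trailing commas before the file is loaded again:

import pandas as pd

df = pd.read_csv("course_data.csv")  # "course_data.csv" is a placeholder name
# Extra commas in the header line show up as empty "Unnamed: N" columns; drop them.
df = df.loc[:, ~df.columns.str.startswith("Unnamed")]
# Also strip whitespace and any commas left inside the header names themselves.
df.columns = df.columns.str.strip().str.rstrip(",")
df.to_csv("course_data_clean.csv", index=False)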
I am looking for a function/query that can search JSON-type data against an array of values. The JSON can have nested keys. I tried using JSON_SEARCH or JSON_CONTAINS, but I get: Undefined function: JSON_SEARCH(data->'$[*]', JSON_ARRAY("abc")). Is there a way to search the JSON object keys for a list of values? For example, JSON column data: {"name":"Back","address":{"location":{"country":[{"city":{"state":['jkl','fgh']},"type":"home"},"empId":"e100043867","FullName":"Back"} and JSON_ARRAY = ['name','state','country']. How can I check whether the JSON_ARRAY values exist as keys in the JSON data? Any help would be appreciated.
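A minimal sketch, outside the transformation, of the kind of key lookup described above, written in Python; the key names and sample data are illustrative only:

import json

def collect_keys(obj) -> set[str]:
    # Recursively collect every key name that appears anywhere in a nested JSON value.
    keys: set[str] = set()
    if isinstance(obj, dict):
        for k, v in obj.items():
            keys.add(k)
            keys |= collect_keys(v)
    elif isinstance(obj, list):
        for item in obj:
            keys |= collect_keys(item)
    return keys

data = json.loads('{"name": "Back", "address": {"location": {"country": "NO"}}}')
wanted = {"name", "state", "country"}
print(wanted.issubset(collect_keys(data)))  # True only if every wanted key exists somewhere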
We have a data model with:

type abc {
  id: String
  kind: String
  name: String
  description: String
  key: String
}

In another table, we have these attributes stored as column values (rows):

kind     attributes
1.0.0    name
1.0.0    description
1.0.0    key
1.0.0    name

We need a query to join the column values (stored as rows) from that table with the solution model table data, grouping by id or kind. Since there can be n attributes, can this be done dynamically so that a column value is matched against a column name? Any help would be appreciated. (A rough sketch of the idea follows below.)
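A minimal sketch in Python/pandas of the dynamic matching described above: the attribute names stored as rows are read out per kind and used as a column list against the model table. The column names and sample rows are assumptions based on the description:

import pandas as pd

# Attribute names stored as row values, keyed by kind/version.
attrs = pd.DataFrame({"kind": ["1.0.0"] * 3, "attributes": ["name", "description", "key"]})

# Illustrative model table with the attributes as ordinary columns.
model = pd.DataFrame(
    {"id": ["a1"], "kind": ["1.0.0"], "name": ["Pump A"], "description": ["Main feed pump"], "key": ["P-1001"]}
)

# For each kind, turn the attribute rows into a dynamic column list and select
# only those columns from the model table; new attributes are picked up automatically.
for kind, group in attrs.groupby("kind"):
    cols = ["id", "kind"] + [c for c in group["attributes"] if c in model.columns]
    print(model.loc[model["kind"] == kind, cols])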
Hi, I have an application to read data from, and I have two options: use ODBC or use OPC UA directly. What are the benchmarks or limitations for each? I have 30,000 data points to read, with sample times of 1 s, 5 s, and 10 s.
What cybersecurity or data governance considerations apply when moving data from a data source at a customer site into CDF?
Hello Community, CDF stores its data in the cloud. Does anyone know which database / database structure is being used, and why Spark SQL was chosen for transformations?
Can someone share the proper procedure for calling any of the Cognite Data Fusion APIs using Postman?
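A minimal sketch of the same two steps one would configure in Postman, written in Python: an OAuth2 client-credentials token request followed by a CDF API call. The tenant, cluster, project, and credentials below are placeholders, and the token endpoint shown assumes Azure AD as the identity provider:

import requests

TENANT = "<azure-tenant-id>"
CLUSTER = "api"          # e.g. "api", "westeurope-1", "az-eastus-1"
PROJECT = "<cdf-project>"

# Step 1: request a bearer token (in Postman this is the OAuth 2.0 / client credentials setup).
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "<client-id>",
        "client_secret": "<client-secret>",
        "scope": f"https://{CLUSTER}.cognitedata.com/.default",
    },
)
token = token_resp.json()["access_token"]

# Step 2: call a CDF endpoint with the token (here: list a few assets to verify access).
assets = requests.get(
    f"https://{CLUSTER}.cognitedata.com/api/v1/projects/{PROJECT}/assets",
    headers={"Authorization": f"Bearer {token}"},
    params={"limit": 5},
)
print(assets.status_code, assets.json())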
Hi Team, following the cdf-fundamentals training, I have mapped the required details, but the data is displaying as a straight line rather than as trends.
Hi, I have set up Grafana using Azure Managed Grafana from the Azure portal, but I don't have permission to install the Cognite Data Fusion plugin in Grafana from Azure. Please let me know how to install the plugin and which permissions are needed.
I am getting the error below while executing the Documentum Extractor:

FATAL: Invalid YAML file: Both database and table must be set when metadata destination is RAW
com.cognite.connector.dctm.config.InvalidConfigException: Invalid YAML file: Both database and table must be set when metadata destination is RAW.

I have gone through the Cognite documentation for possible configuration options, but this is not mentioned there; there is no parameter for metadata_destination, database, or table. Please help me with details on where to find this parameter information, or how to configure it in the YAML. My config.yaml contents are as follows:

version: 1
logger:
  console:
    level: INFO
cognite:
  # Read these from environment variables
  host: #${COGNITE_BASE_URL}
  project: #${COGNITE_PROJECT}
  idp-authentication:
    # token-url: ${COGNITE_TOKEN_URL}
    tenant: #${tenant}
    client-id: #${COGNITE_CLIENT_ID}
    secret: #${COGNITE_CLIENT_SECRET}
    scopes:
Hi Team, greetings of the day! I'm learning Cognite Charts, but I'm unable to find Charts either in Explore or in the All category. Can you guide me to the right navigation? Regards, Navin
A screenshot of the error during the transformation run is attached, and below is the query used to transform. The preview works fine, but during the transformation run it fails with the error above, saying startTime values cannot be greater than endTime values. According to the query, that shouldn't be the case. Please suggest a solution. Sample data (data sheet) is attached for reference.

select
  `Notification_no` as externalId,
  `startTime` as startTime,
  `endTime` as endTime,
  `description` as description,
  `type` as type,
  1118188859138991 as dataSetId
from (
  select
    concat('NO_', QMNUM) as `Notification_no`,
    --to_timestamp(QMDAT, 'M/d/yyyy') as `startTime`,
    CASE
      WHEN RIGHT(MZEIT, 2) = 'AM'
        THEN TO_TIMESTAMP(CONCAT(QMDAT, ' ', SUBSTRING(MZEIT, 1, LENGTH(MZEIT) - 3)), 'M/d/yyyy h:mm:ss')
      ELSE TO_TIMESTAMP(CONCAT(QMDAT, ' ', SUBSTRING(MZEIT, 1, LENGTH(MZEIT) - 3)), 'M/d/yyyy h:mm:ss') + INTERVAL 12 HOURS
    END AS `startTime`,
    CASE
      WHEN isnull(QMDAB) THEN
Hi there! I have a use case where a file is uploaded by a user to an API, and the API then uploads the file to CDF Files. We want to avoid holding the full file in memory at once, and therefore must stream the file contents from the request handler directly into CDF Files. There are two ways of achieving this: (1) stream the request body from the request handler directly into CDF Files' upload URL, or (2) chunk the request body and upload each chunk as a separate request. The first option may be achievable (a rough sketch of it follows below), but I don't believe the second option is possible. Do you have any insight into whether it is possible to chunk a file upload like this in CDF Files?
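A minimal sketch of the first (streaming) option, assuming an incoming file-like stream from the request handler; the cluster, project, and credentials are placeholders, and the flow follows the public Files API (create the file metadata, then PUT the content to the returned uploadUrl):

import requests

CLUSTER = "api"
PROJECT = "<cdf-project>"
HEADERS = {"Authorization": "Bearer <access-token>"}

def stream_to_cdf_files(upload_stream, name: str) -> None:
    # 1. Create the file object in CDF; the response contains a one-time uploadUrl.
    create = requests.post(
        f"https://{CLUSTER}.cognitedata.com/api/v1/projects/{PROJECT}/files",
        headers=HEADERS,
        json={"name": name, "externalId": name},
    )
    upload_url = create.json()["uploadUrl"]

    # 2. Stream the request body straight into the upload URL without buffering
    #    the whole file in memory (requests streams file-like objects passed as data).
    requests.put(upload_url, data=upload_stream)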
Hello! I have three quick questions that come to mind: (1) From your perspective, what is the one-sentence value statement of CDF? (2) How can we justify all the manual work required to prepare the data for ingestion into the platform? (3) How do our customers save money by using CDF?
We are using the online version of the Jupyter notebook from the CDF portal for a client project (DEV) and are able to get the client config / client object, create and retrieve assets, run transformations, create data sets, and so on. The client IT team has created and registered an app in Azure and shared the tenant ID, client ID/name, and secret. When we use these parameters and run the same code locally in a notebook, it is not able to perform certain tasks (such as data set creation). Basically, in the online version the token carries all the IAM groups (data engineer, data scientist, data analyst, OIDC-Admin), but when we set the configuration parameters (client ID, tenant, secret, etc.) locally, we don't get those groups; the token only shows “Data Integration”, which has limited scope and doesn't allow creating data sets. How should we understand this part of roles and access management in the CDF construct and the applications registered in Azure AD?
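A minimal sketch, assuming the cognite-sdk Python package and Azure AD client credentials (all identifiers below are placeholders), of how to inspect which groups and capabilities a given principal's token actually carries; comparing this output between the portal notebook and the local setup usually shows which CDF groups the app registration's service principal is (or is not) mapped to:

from cognite.client import CogniteClient, ClientConfig
from cognite.client.credentials import OAuthClientCredentials

TENANT = "<azure-tenant-id>"
CLUSTER = "api"            # e.g. "api", "westeurope-1"
PROJECT = "<cdf-project>"

creds = OAuthClientCredentials(
    token_url=f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token",
    client_id="<client-id>",
    client_secret="<client-secret>",
    scopes=[f"https://{CLUSTER}.cognitedata.com/.default"],
)
client = CogniteClient(
    ClientConfig(
        client_name="iam-check",
        project=PROJECT,
        base_url=f"https://{CLUSTER}.cognitedata.com",
        credentials=creds,
    )
)

# The capabilities attached to this token come from the CDF groups the Azure AD
# app (service principal) is a member of, not from your personal user account.
print(client.iam.token.inspect())
print(client.iam.groups.list())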
I have time series data identified by tags that can contain around 1,500 to 5,000+ records generated per day. I need to perform a time-weighted average and calculate the time-weighted value of the time series for given time ranges. How do I recreate this computation in Cognite, given that the PI data is already sitting in CDF? Basically, I have to recreate this OSIsoft PI function inside CDF: PIAdvCalcDat(tagname, stime, etime, interval, mode, calcbasis, minpctgood, cfactor, outcode, PIServer)
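A minimal sketch of a time-weighted average computed client-side from raw datapoints retrieved with the cognite-sdk; the external ID and time window are placeholders, and the piecewise-constant (staircase) weighting is an assumption about how the PI calculation should be reproduced:

from cognite.client import CogniteClient

client = CogniteClient()  # assumes client configuration/credentials are set up elsewhere

def time_weighted_average(external_id: str, start: str, end: str) -> float:
    dps = client.time_series.data.retrieve(external_id=external_id, start=start, end=end)
    ts, vals = dps.timestamp, dps.value
    if len(ts) < 2:
        return vals[0] if vals else float("nan")
    # Weight each value by how long it stays "current" until the next datapoint arrives.
    weighted = sum(v * (t_next - t) for v, t, t_next in zip(vals, ts, ts[1:]))
    return weighted / (ts[-1] - ts[0])

print(time_weighted_average("<timeseries-external-id>", "30d-ago", "now"))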
Hi team, the Hess team has come up with a request regarding the Documentum Extractor; please see the context below. Currently we have Cognite connected to Documentum via the raw folder, which pulls in the raw file format from EDMS. We actually need to connect to Documentum's rendition folder, where the PDF versions of all the files are saved. Cognite can only effectively contextualize PDFs, so we need to connect to the rendition folder directly. We contacted the Documentum team, but they said that only people on the Documentum team can connect to that folder. We just need the Cognite extractor to have access to that rendition repository. Please let me know if you need more information. Projects: hess-dev, hess-us.
Hello, I am quite new to using REST APIs. My company has a license to use CDF, and I understand that Cognite has developed a .NET REST API SDK. I searched a bit but could not find an example of how to use this SDK for data retrieval, especially for time series data. Could I get a code snippet showing how to use it? Thanks.
Hi Team, I am facing a challenge in creating an asset hierarchy in Power BI. The data is not in the required format for hierarchy creation, i.e. I do not get the parent-child relation in separate columns; instead, all nodes appear in a single column. I do have a ParentID but am unsure how it can help me build the hierarchy. Any inputs will be appreciated. Thanks! Regards, Arati
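A minimal sketch (in Python/pandas, outside Power BI) of turning a single node column plus ParentID into per-level columns, which is the shape a hierarchy needs; inside Power BI itself the DAX PATH/PATHITEM functions perform essentially the same parent walk. Column names and sample rows below are assumptions:

import pandas as pd

nodes = pd.DataFrame(
    {"id": [1, 2, 3, 4], "name": ["Plant", "Area", "Pump", "Motor"], "parent_id": [None, 1, 2, 2]}
)
parents = dict(zip(nodes["id"], nodes["parent_id"]))
names = dict(zip(nodes["id"], nodes["name"]))

def path_to_root(node_id) -> list:
    # Walk up the ParentID chain until the root, collecting node names.
    path = []
    while node_id is not None and not pd.isna(node_id):
        path.append(names[node_id])
        node_id = parents[node_id]
    return list(reversed(path))  # root first, leaf last

# One column per hierarchy level, which Power BI can consume directly.
levels = pd.DataFrame([path_to_root(i) for i in nodes["id"]]).add_prefix("Level_")
print(pd.concat([nodes, levels], axis=1))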
(RAW table: as above, with 179 columns in total, of which 41 contain numeric data and 138 contain text data.) I have ingested data into the RAW table. Now I want to get only the 41 columns that have a numeric data type. How can this be done in the transformation? I want to run a similar query for the remaining 138 columns that have a text data type.
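A minimal sketch, assuming the cognite-sdk Python package and placeholder database/table names, that samples the RAW table and works out which columns parse as numeric, so those names can then be listed explicitly in the transformation's select (and the remainder used for the text-column query):

import pandas as pd
from cognite.client import CogniteClient

client = CogniteClient()  # assumes client configuration/credentials are set up elsewhere

rows = client.raw.rows.list(db_name="<raw-db>", table_name="<raw-table>", limit=1000)
df = rows.to_pandas()

# A column counts as numeric if every sampled value can be parsed as a number.
numeric_cols = [c for c in df.columns if pd.to_numeric(df[c], errors="coerce").notna().all()]
text_cols = [c for c in df.columns if c not in numeric_cols]

print(len(numeric_cols), "numeric columns:", numeric_cols[:5], "...")
print(len(text_cols), "text columns:", text_cols[:5], "...")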
Executing a transformation from RAW/staging to Events. Code:

select
  cast(`uniqueid` as BIGINT) as id,
  cast(`Start Time` as TIMESTAMP) as startTime,
  cast(`End Time` as TIMESTAMP) as endTime,
  1626362640169782 as dataSetId,
  concat("Chem-Batch", "-", Unit, "-", `Batch ID`) as description,
  'Process' as type,
  'Batch' as subtype
from `Chem-Batch`.`Batch-Unit`
where `Start Time` > '2023-06-13'

The preview shows the expected 3 rows, but running it yields this error:

Request with id 90f5d941-7f0e-9553-9ca5-317d784434d1 to https://az-eastus-1.cognitedata.com/api/v1/projects/ra-istc-sandbox/events/update failed with status 400: Event id not found. Missing ids: [48, 49].

All the column data in the RAW rows seems appropriate. Any ideas are welcome. Chris