Facing a critical error while running the SQL transformation below. (Raw input Excel attached for reference.)

SQL error:

```
Text '9/28/2022' could not be parsed at index 0 | code: 400 | X-Request-ID: 14d6d486-1958-99a3-b552-d19e5937bf82
```

```sql
-- Loads SAP Work Orders as events. Data is already contextualized with assets
-- using functional location. An inner join with CDF Assets is used to ensure
-- only work orders for assets already loaded are ingested.
select
  concat('WO_', `Order Number`) as externalId,
  to_timestamp('Basic Start Date', 'M/dd/yyyy') as startTime,
  to_timestamp(`Basic Finish Date`, 'M/dd/yyyy') as endTime,
  'Work Order' as type,
  array(`Asset ID`) as assetIds,
  'SAP PM' as source,
  `Description` as description,
  `Order Type` as subtype,
  2395436788557957 as dataSetId,
  map_concat(
    to_metadata_except(array(
      -- "Order Number",
      "Basic Start Time", "Basic Start Date", "Basic Finish Date",
      "Asset ID", "Description", "Order Type"), *)
    -- map("Priority",
```
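Two things worth checking, hedged as guesses from the query above: `to_timestamp('Basic Start Date', ...)` passes the column name as a string literal rather than a backticked column reference, and Spark 3's datetime patterns can reject values that don't match the pattern exactly (e.g. single-digit days against `dd`, so `'M/d/yyyy'` may be needed). If you prefer to normalize the dates in Python before ingestion instead, here is a minimal sketch; `parse_us_date` is a hypothetical helper, not part of any SDK:

```python
from datetime import datetime

def parse_us_date(raw: str) -> str:
    """Normalize US-style dates such as '9/28/2022' or '09/28/2022'
    (tolerating stray whitespace) into ISO-8601 strings, which Spark's
    to_timestamp parses without a custom pattern."""
    cleaned = raw.strip()
    for fmt in ("%m/%d/%Y", "%m/%d/%y"):  # 4-digit year first, then 2-digit
        try:
            return datetime.strptime(cleaned, fmt).isoformat()
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date: {raw!r}")
```

Python's `%m`/`%d` accept non-zero-padded values, which is exactly the case that trips up stricter pattern parsers.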
I have a daily feed arriving as a file in SharePoint, of the form:

```
Date        AX1   AB1   AW1   AN1   AR1    AB1   BW1   BG1   BT1
01-01-2023  0.23  1.23  2.54  0.98  0.655  1.09  0.87  0.34  0.19
01-02-2023  0.34  0.56  0.44  0.89  0.576  0.81  1.29  1.11  0.67
...
```

I need to pull this data and transform it into an applicable resource type (Sequences in CDF). I don't want to store it as time series, since I need it in tabular form to operate on further, e.g. matrix and dot products. Is there a way to write a transformation to push it as Sequences so that I can use it as a matrix for further calculations? Please share some example SQL transformation snippets to achieve this.
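If the file is pulled with the SDK rather than a SQL transformation, the feed can be reshaped into the `(row_number, values)` tuples that sequence-row insertion expects. A minimal sketch, assuming the file has been read as CSV text (adjust the delimiter for your actual feed); the SDK call is left as a comment since it needs an authenticated client:

```python
import csv
import io

def to_sequence_rows(text: str):
    """Convert the wide daily feed into (row_number, [values]) tuples.
    The first column (Date) is kept as a string column; the rest are
    parsed as floats so they can be used as a numeric matrix."""
    reader = csv.reader(io.StringIO(text))  # assumes comma-delimited input
    header = next(reader)
    columns = header[1:]  # AX1, AB1, ... become the sequence's value columns
    rows = []
    for i, rec in enumerate(reader):
        rows.append((i, [rec[0]] + [float(v) for v in rec[1:]]))
    return columns, rows

# Hypothetical insertion (requires an authenticated CogniteClient and a
# sequence created with matching column definitions):
# client.sequences.data.insert(rows=rows, external_id="daily_feed", column_external_ids=["Date"] + columns)
```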
Hi, I have created a Cognite file extractor from a local folder, and I want to keep a log of all runs in a sub-folder. When I change the log level on the console and add a file as storage, nothing changes. From the config file:

```yaml
logger:
  console:
    level: DEBUG
  file:
    level: INFO
    path: ".\\logs\\log.txt"  # also tried with full path
    retention: 7
```

Are there any more steps needed to get the logger to change? I have run it both as a local user and as admin.

Hedda
Is there any way to define a schedule for a custom DB extractor (based on cognite-extractor-utils), similar to the default extractor?
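To my knowledge, extractors built on cognite-extractor-utils don't automatically get the `schedule` block that the prebuilt extractors ship with; common workarounds are an OS-level scheduler (cron, Windows Task Scheduler) or a run loop inside the extractor itself. A minimal, hedged sketch of the loop approach (`run_on_schedule` is a hypothetical helper; `max_runs` exists only to make it testable):

```python
import time

def run_on_schedule(task, interval_seconds: float, max_runs=None) -> int:
    """Call task() repeatedly, sleeping between runs. A stand-in for the
    scheduling the prebuilt extractors provide via config. Returns the
    number of runs performed."""
    runs = 0
    while max_runs is None or runs < max_runs:
        task()  # e.g. one extraction pass against the source database
        runs += 1
        if max_runs is not None and runs >= max_runs:
            break
        time.sleep(interval_seconds)
    return runs
```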
I am trying to run code that fetches time series based on some tags available in a project. When I execute the code using Jupyter notebooks in the CDF online-notebook feature, it runs fine. When I run the same script locally, after setting up connectivity using interactive login, the time series retrieve call fails. Please help.

Code:

```python
from datetime import datetime, timezone

utc = timezone.utc
pi = client.time_series.data.retrieve_dataframe(
    external_id=[
        "pi:2FC1898.DACA.PV",
        "pi:2TC1066.DACA.PV",
        "pi:LAB_133-X013_APIGRAVOB",
        "pi:2FC1898.PIDA.OP",
    ],
    start=datetime(2023, 1, 1, tzinfo=utc),
    end=datetime(2023, 5, 1, tzinfo=utc),
    aggregates=["average"],
    granularity="1d",
)
```

Error traceback:

```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In [9], line 5
      1 from d
```
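One likely cause, offered as a guess: an `AttributeError` on a client accessor that works in the hosted notebook but not locally often means the local environment has an older `cognite-sdk` than the one the CDF notebook runs, so the attribute path (`client.time_series.data`) doesn't exist locally. A small runnable check of the installed version (the helper is hypothetical, not part of the SDK):

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(dist_name: str):
    """Return the installed version string of a distribution, or None if
    it is not installed in the current environment."""
    try:
        return version(dist_name)
    except PackageNotFoundError:
        return None

# Compare your local SDK version with the hosted notebook's:
# print(installed_version("cognite-sdk"))
# If it is older, upgrading usually resolves the AttributeError:
#   pip install --upgrade cognite-sdk
```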
I have a lot of time series objects in CDF data sets. Out of this long list, I have a particular set of 16 time series tags, and each tag has 4 sub-tags, for an overall total of 64 tags. I need to retrieve the datapoints for each of those 64 tags. So how do I store my desired list of time series tags, along with their child tags, within CDF? Where do I store and maintain them? The list may change and needs the flexibility to be edited based on business users' needs; the choice is entirely up to the enterprise. Please share complete steps to simulate this in CDF. This is for yield-tracking analytics, and all these tags correspond to the yield groups/products in a refinery.
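One common pattern, sketched here as a suggestion rather than the only option: keep the editable tag list in a CDF RAW table, which business users can view and edit in Fusion, and read it at analysis time to drive the datapoint retrieval. The database/table names below are hypothetical. The helper flattens a parent-to-subtag mapping into the `{key, columns}` row shape RAW uses:

```python
def build_raw_rows(hierarchy):
    """Flatten a {parent_tag: [sub_tags]} mapping into RAW-style rows:
    one row per parent tag, with its sub-tags as columns. The output
    matches the key/columns shape used for RAW row insertion."""
    return [
        {
            "key": parent,
            "columns": {f"subtag_{i + 1}": sub for i, sub in enumerate(subs)},
        }
        for parent, subs in hierarchy.items()
    ]

# Hypothetical usage (requires an authenticated CogniteClient):
# rows = build_raw_rows({"yield_group_1": ["tagA", "tagB", "tagC", "tagD"], ...})
# client.raw.rows.insert("yield_tracking", "tag_hierarchy",
#                        {r["key"]: r["columns"] for r in rows})
```

Editing the RAW table then changes what the analytics job retrieves, with no code change.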
I need support for the course Python SDK Transformations. I'm running the commands from the notebook provided in Git and stopped on:

```python
result = client.transformations.run(asset_transformation.id, wait=False)
```

The resulting error is:

```
CogniteAPIError: Invalid source/destination credentials: Could not authenticate with the OIDC credentials. Please check your credentials. | code: 403 | X-Request-ID: 78e0e54a-a3ca-9bc4-b2f5-e303ed6a7207
```

I've checked the authentication process, and I can perform all tasks (search, list, add data sets, assets, and everything else), but I can't run the transformation. Does anyone know how to solve it? I have the same problem in the UI.
Hi, I wanted to understand whether Cognite data extractors can stream real-time data into CDF (with or without delay) for time series analysis, or whether we need to schedule the extractor and extract a specific number of rows every time it runs.
Hello community, can someone please share the approach (along with code) for extracting files from SharePoint Online (.xls, with multiple worksheets), reading their content, and loading it as RAW tables in CDF? Is there a direct feature in the SharePoint file extractor that does this job, or should we use the SDK to extend the file extraction and then read the content and insert it into tables? If we have a large set of files, each containing multiple sheets, it can be hard to process all of them dynamically. Please advise.
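As a sketch of the SDK-side half of this, assuming the workbooks have already been downloaded and parsed (e.g. with `pandas.read_excel(path, sheet_name=None)`, shown only in comments since it needs pandas/openpyxl), the per-sheet records can be shaped into RAW tables, one table per worksheet. All names here are hypothetical:

```python
def sheets_to_raw_rows(sheets):
    """Map {sheet_name: [record_dicts]} into {table_name: {row_key: columns}},
    the shape needed for inserting rows into CDF RAW, using one RAW table
    per worksheet and the row index as the row key."""
    return {
        sheet_name: {str(i): rec for i, rec in enumerate(records)}
        for sheet_name, records in sheets.items()
    }

# Hypothetical end-to-end (requires pandas/openpyxl and a CogniteClient):
# frames = pandas.read_excel("workbook.xlsx", sheet_name=None)
# tables = sheets_to_raw_rows({n: f.to_dict("records") for n, f in frames.items()})
# for table, rows in tables.items():
#     client.raw.rows.insert("sharepoint_db", table, rows, ensure_parent=True)
```

A stable business key (if one exists in the data) would be a better row key than the row index when files are re-processed.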
Hello community, can anyone tell me whether there is a way to upload .csv files to CDF using the Python SDK?
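Assuming the goal is to store the file itself in CDF Files, the SDK's `client.files.upload` takes a local path; that call is left as a comment below since it needs an authenticated client. The runnable part is a small stdlib helper that prepares a CSV to upload (the helper and file names are hypothetical):

```python
import csv

def write_csv(rows, path: str) -> str:
    """Write a list of dicts to a CSV file (header from the first row's
    keys) and return the path, ready to be uploaded."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
    return path

# Hypothetical upload (requires an authenticated CogniteClient):
# path = write_csv(records, "daily_feed.csv")
# client.files.upload(path, name="daily_feed.csv", mime_type="text/csv")
```

If the intent is instead to ingest the CSV's contents as structured data, RAW tables (via `client.raw.rows.insert`) are usually a better fit than file upload.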
Do we have any API in Cognite that can show a preview of the data produced by a transformation?
I am currently working through "Learn to Use the Cognite Python SDK". I am unsure where the workspace/code is provided so that I can take the final test and pass this course. I was originally using a Google Colab research file that includes prompts, but the questions on the final test did not correlate with this file. Can you please instruct me on how to find the correct files/workspace? Thank you in advance.
Hi, there are raw data files which are used to perform computations (yield-tracking measures) that use tons of input data attributes to compute the metrics. Currently, VBA macros in Excel are used to perform the calculations and publish the outputs. We have to move the computation into CDF. Can we use Cognite Functions for this? Is there a way to create a Python script that can perform these calculations within CDF? If not, should we create some external cloud-based construct like Azure Functions or Lambda routines? Please share some examples or snippets that can help with this project task.
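Cognite Functions are a natural fit for this kind of recurring Python computation: a function is a `handle(client, data)` entry point deployed into CDF, where `client` is an authenticated CogniteClient injected at runtime. A minimal sketch, with a placeholder yield calculation standing in for the real VBA logic (the payload shape, names, and deployment calls are assumptions, not the actual project's):

```python
def handle(client, data):
    """Entry point for a Cognite Function. 'data' is the JSON payload
    passed when the function is called. Here, a placeholder yield
    computation: each product stream rate divided by the feed rate."""
    feed = data["feed_rate"]
    yields = {name: rate / feed for name, rate in data["inputs"].items()}
    return {"yields": yields}

# Hypothetical deployment and call (requires an authenticated CogniteClient):
# client.functions.create(name="yield-tracking",
#                         external_id="yield_tracking",
#                         function_handle=handle)
# client.functions.call(external_id="yield_tracking",
#                       data={"inputs": {"naphtha": 25.0}, "feed_rate": 100.0})
```

Because `handle` is plain Python, the Excel logic can be ported and unit-tested locally (calling `handle(None, payload)`) before deploying, and schedules can be attached so it runs on a cadence inside CDF without any external Azure Functions or Lambda.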
Hello team, does the Python SDK currently support the cursor parameter or not?

Regards, Ayushi.