We have a unique use case: data points are collected from a PI server for a refinery plant, and alongside those we have other refinery data such as crude assays, diet, and mass balance. All of these data points (in tabular format) are collected, formulas are applied to derive properties (swing cut %, CBLISS, vol %, etc.), and derived tables are created. An input feed is then built from the actual data and sent to the Petro-Sim tool (a math simulation/modeling tool), and the output from that tool is collected and stored. After all this data wrangling, charts are produced for 50+ crude product variants to track yield (actual data, non-linear model data, linear model data). As per the design, we plan to use CDF RAW tables, perform all the computations, and create derived RAW tables to fulfil this purpose. We would then use the Petro-Sim connector and store the model output in RAW tables as well. So, we don't tend to see the typi
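The "read RAW rows, apply a formula, write a derived table" step in the middle can be kept as a pure function, which makes it easy to test outside CDF before wiring it up. The sketch below uses a hypothetical vol % formula and made-up column names (`cut_volume`, `total_volume`); the real refinery calculations live in the source spreadsheets and would replace this.

```python
# Sketch only: the formula and column names are placeholders for the real
# refinery property calculations (swing cut %, vol %, etc.).

def compute_vol_pct(rows):
    """Given RAW rows as a dict of {row_key: column_dict}, derive a
    volume-percent column and return a list of derived rows.

    The 'cut_volume' and 'total_volume' fields are hypothetical names.
    """
    derived = []
    for key, row in rows.items():
        vol_pct = 100.0 * row["cut_volume"] / row["total_volume"]
        derived.append({"key": key, **row, "vol_pct": round(vol_pct, 2)})
    return derived

# Example rows as they might come back from a RAW table read:
rows = {
    "crude-A": {"cut_volume": 12.5, "total_volume": 50.0},
    "crude-B": {"cut_volume": 20.0, "total_volume": 100.0},
}
print(compute_vol_pct(rows))
```

In a CDF transformation or function, the input dict would come from a RAW table read and the returned rows would be inserted into a derived RAW table; keeping the computation pure means the same code runs locally and inside CDF.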
Hi, there are raw data files used to perform computations (yield-tracking measures) that take a large number of input attributes. Currently, VBA macros in Excel perform the calculations and publish the outputs. We have to move this computation into CDF. Can we use Cognite Functions for this? Is there a way to create a Python script that performs these calculations within CDF? If not, should we create some external cloud construct such as Azure Functions or Lambda routines? Please share some examples or snippets that can help with this task.
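Cognite Functions do fit this: a function is a Python module exposing a `handle(client, data)` entry point, where `client` is a pre-authenticated CogniteClient and `data` is the payload you call it with. A minimal sketch, with a hypothetical yield formula standing in for the actual VBA logic:

```python
def handle(client, data):
    """Entry point that Cognite Functions invokes.

    'client' is a pre-authenticated CogniteClient when running inside CDF;
    the calculation here only uses 'data', so it can also be tested
    standalone. The yield formula is a hypothetical stand-in for the
    Excel/VBA computations.
    """
    feed = data["feed_rate"]
    product = data["product_rate"]
    return {"yield_pct": round(100.0 * product / feed, 2)}

# Local smoke test before deploying as a Cognite Function:
print(handle(None, {"feed_rate": 200.0, "product_rate": 150.0}))
```

Inside the real function, `client` would be used to read the input attributes from RAW and write the computed metrics back, so no external Azure Functions or Lambda routines should be needed for this part.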
I have cloned the following into my local DEV folder: using-cognite-python-sdk.git. I have also installed the dependencies defined in pyproject.toml in the repo, so I can run the code in Jupyter notebooks. (Note: change the kernel to the virtual environment created by Poetry, and add new libraries as needed.) How do I change the kernel in a Jupyter notebook to the one created by Poetry? When I open a notebook from the repo folder, it only shows the general Python 3 (ipykernel), which was present even before I installed Poetry. So how do I change the kernel in my notebook? Please advise.
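The Poetry virtualenv does not appear in Jupyter's kernel list until it is registered with ipykernel. One way to do that (the kernel name and display name below are just suggestions):

```shell
# Run from the cloned repo folder (where pyproject.toml lives).
# Make sure ipykernel is available inside the Poetry environment:
poetry add --group dev ipykernel

# Register the Poetry virtualenv as a named Jupyter kernel:
poetry run python -m ipykernel install --user \
    --name using-cognite-python-sdk \
    --display-name "Python (poetry: cognite-sdk)"
```

After restarting Jupyter, "Python (poetry: cognite-sdk)" should appear under Kernel → Change kernel; selecting it runs the notebook inside the Poetry environment, so the repo's dependencies resolve.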
I have a set of work orders with the following schema, along with sample data for two assets (10010234, 10010235):

Equipment  Work Order #  Order Type  Start Date  Start Time  End Date   End Time  Cost
10010234   110025063     Unplanned   7/6/2018    11:31:15    7/6/2018   23:00:00  35,600
10010234   110026082     Planned     8/25/2018   12:27:15    8/27/2018  23:44:00  37,180
10010234   110027101     Unplanned   12/8/2018   13:30:15    12/8/2018  17:00:00  26,580
10010234   110028120     Unplanned   1/27/2019   14:40:15    1/29/2019  1:45:00   38,900
10010235   110023050     Planned     3/1/2018    15:57:15    3/2/2018   3:02:00   25,000
10010235   110024617     Planned     6/14/2018   17:21:15    6/16/2018  4:30:00   30,000
10010235   110026184     Planned     9/27/2018   0:25:00     9/27/2018  6:09:00   24,600
10010235   110027751     Unplanned   1/10/2019   20:30:15    1/12/2019  7:59:00   35,600

Equipment is stored as assets and the work orders are stored a
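A typical analysis over this data is maintenance cost per asset split by order type, which can be computed with a simple aggregation over the rows above (sample data transcribed from the table; costs as plain integers):

```python
from collections import defaultdict

# Sample rows from the post: (equipment, order type, cost).
work_orders = [
    ("10010234", "Unplanned", 35600),
    ("10010234", "Planned",   37180),
    ("10010234", "Unplanned", 26580),
    ("10010234", "Unplanned", 38900),
    ("10010235", "Planned",   25000),
    ("10010235", "Planned",   30000),
    ("10010235", "Planned",   24600),
    ("10010235", "Unplanned", 35600),
]

def cost_by_type(rows):
    """Sum maintenance cost per (equipment, order type) pair."""
    totals = defaultdict(int)
    for equipment, order_type, cost in rows:
        totals[(equipment, order_type)] += cost
    return dict(totals)

print(cost_by_type(work_orders))
```

When the work orders live in CDF (e.g. as events linked to the equipment assets), the same aggregation would run over rows retrieved with the SDK instead of this hard-coded list.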
As a data engineer, I would like to store geolocation attributes (lat/long) against the assets and visualize the assets in Data Explorer, overlaid on a GIS or maps application.
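One common approach is to carry lat/long as asset metadata, which in CDF must be string key-value pairs. A small helper to build such a metadata dict (the `latitude`/`longitude` key names are a convention chosen here, not a CDF requirement; a real pipeline would attach the dict via the SDK's asset create/update calls):

```python
# Sketch: build a string metadata dict suitable for a CDF asset.
# The key names "latitude"/"longitude" are an assumed convention.

def geo_metadata(lat, lon):
    """Validate coordinates and return them as CDF-style string metadata."""
    if not (-90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0):
        raise ValueError("coordinates out of range")
    return {"latitude": f"{lat:.6f}", "longitude": f"{lon:.6f}"}

print(geo_metadata(29.7604, -95.3698))  # example: Houston, TX
```

Keeping coordinates in a consistent, string-typed format makes them easy to export to a GIS/maps layer or read back from any downstream tool.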
I have ingested RAW data into CDF for 16 pieces of equipment and applied transformations to set up assets, time series, datapoints, and events. I have a P&ID diagram (PDF) showing 4 of these equipment items with their P&ID numbers. I would like to know the process for ingesting this file as a resource type for this equipment setup, and where I should upload the file. When a user selects any of the above equipment in CDF, the system should display the attached P&ID diagram.
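Files in CDF are their own resource type, and a file can be linked to assets at upload time so it shows up when those assets are selected. A sketch of the upload step, assuming the Python SDK's `client.files.upload(...)` call with `asset_ids`; a stub client is used here so the wiring can be checked without credentials (the file name and asset IDs are hypothetical):

```python
# Sketch of linking a P&ID PDF to equipment assets via the files API.

def upload_pid(client, path, asset_ids, data_set_id=None):
    """Upload a P&ID PDF and link it to the given asset internal IDs."""
    return client.files.upload(
        path,
        name="PID-diagram.pdf",          # hypothetical file name
        mime_type="application/pdf",
        asset_ids=asset_ids,
        data_set_id=data_set_id,
    )

# Stand-ins for a real CogniteClient, recording the call for inspection:
class _StubFilesAPI:
    def upload(self, path, **kwargs):
        return {"path": path, **kwargs}

class _StubClient:
    files = _StubFilesAPI()

result = upload_pid(_StubClient(), "diagrams/pid.pdf", [101, 102, 103, 104])
print(result)
```

With the file linked via `asset_ids`, it appears under each equipment asset in Data Explorer; running interactive engineering diagram detection on the PDF afterwards can additionally link the tags drawn on the P&ID to the assets.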
While setting up Google Colab for the CDF environment and authentication, I am getting an error and am unable to trace its root cause. This code is part of the notebook setup from the Hands-On course (link: Data processing and analysis for IDA course.ipynb - Colaboratory (google.com)).

TypeError                                 Traceback (most recent call last)
Cell In [2], line 57
     55 def get_token():
     56     return authenticate_device_code(app)['access_token']
---> 57 client = CogniteClient(
     58     ## token_url=f'{AUTHORITY_URI}/v2.0',
     59     token=get_token,
     60     token_client_id=CLIENT_ID,
     61     project=COGNITE_PROJECT,
     62     base_url=f'https://{CDF_CLUSTER}.cognitedata.com',
     63     client_name='cognite-python-dev',
     64 )
     65 print(client.iam.token.inspect())

TypeError: __init__() got an unexpected keyword argument 'token'
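The error suggests the notebook was written for an older cognite-sdk: in recent major versions, `CogniteClient` no longer accepts `token`/`token_client_id` keywords, and authentication instead goes through a `ClientConfig` with a credentials object. A sketch of the equivalent setup under that assumption, reusing the notebook's own `get_token` and variables:

```python
# Assumed fix for cognite-sdk >= 5, where CogniteClient takes a ClientConfig
# and credentials object instead of a 'token' keyword.
from cognite.client import CogniteClient, ClientConfig
from cognite.client.credentials import Token

def get_token():
    # authenticate_device_code and app come from earlier notebook cells.
    return authenticate_device_code(app)["access_token"]

client = CogniteClient(
    ClientConfig(
        client_name="cognite-python-dev",
        project=COGNITE_PROJECT,
        base_url=f"https://{CDF_CLUSTER}.cognitedata.com",
        credentials=Token(get_token),  # accepts a string or a callable
    )
)
print(client.iam.token.inspect())
```

Alternatively, pinning the SDK to the version the course notebook was written against (e.g. an older `cognite-sdk` release) would keep the original cell working unchanged.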