Hi @Glen Sykes, I am working on a use case called “Yield tracking”. Basically, I take a number of inputs and compute the mass balance, the assay normalization, and the diet feed; then I take the dot product of the last two data elements and use the output to compute swing % (SW%) values based on the type of crude output (e.g. liquids, gas products), and apply those SW% slabs to compute derived yields for some 500 line items in an assay matrix. These steps have to run daily on the incoming data feed, with the outputs stored appropriately in CDF so that Cognite Charts can be plotted from them. I have around 137 charts, most with three items each (Linear, Non-linear, and Actual). As a POC, I have built this whole flow in a local Jupyter notebook using flat files (CSV) and data frames in Python, and plotted the results with matplotlib. Now, I have to translate all of it into Cognite using CDF Functions, Sequences, and time series, and then use to
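The daily flow described above (dot product of mass balance and diet feed, then SW% slab lookup) can be sketched in plain Python before moving it into a CDF Function. All names, slab boundaries, and values below are hypothetical placeholders, not the actual yield-tracking logic:

```python
# Hypothetical sketch of the daily yield-tracking step.
# Slab boundaries and product names are made-up placeholders.

def dot_product(mass_balance, diet_feed):
    """Dot product of the normalized mass-balance and diet-feed vectors."""
    return sum(m * d for m, d in zip(mass_balance, diet_feed))

# SW% slabs per crude-output type: (lower bound, upper bound, swing %)
SW_SLABS = {
    "liquids": [(0.0, 0.4, 5.0), (0.4, 0.8, 10.0), (0.8, 1.01, 15.0)],
    "gases":   [(0.0, 0.5, 8.0), (0.5, 1.01, 12.0)],
}

def swing_pct(value, output_type):
    """Look up the SW% slab that the computed value falls into."""
    for lo, hi, sw in SW_SLABS[output_type]:
        if lo <= value < hi:
            return sw
    raise ValueError(f"{value} outside slabs for {output_type}")

mb = [0.2, 0.3, 0.5]       # normalized mass balance (hypothetical)
diet = [0.5, 0.3, 0.2]     # diet-feed composition (hypothetical)
x = dot_product(mb, diet)  # 0.2*0.5 + 0.3*0.3 + 0.5*0.2 = 0.29
print(swing_pct(x, "liquids"))  # 0.29 falls in the 0.0–0.4 slab -> 5.0
```

Once this logic is isolated as pure functions, wrapping it in a scheduled Cognite Function is mostly a matter of swapping the CSV inputs for CDF reads.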
Thanks @roman.chesnokov @Håkon V. Treider, it worked. I ran the upgrade command pip install --upgrade cognite-sdk in a project-specific folder, and when I launched the notebook from that folder, it worked. Should I add this SDK to my PATH variable so that it runs for any notebook in any folder?
Thanks @roman.chesnokov for the inputs. But is it okay to store the time series tags as ‘assets’? Will that work well? I thought only asset information was supposed to be maintained as ‘Assets’. Here, I have around 343k time series objects in total, but I want to maintain a list of 18 parent tags, each with 4 tags attached to the parent.
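One common pattern is to keep only the 18 parent tags as assets and attach the time series to them via their asset reference, rather than modelling every time series as an asset. A minimal sketch of that shape, using plain dicts standing in for the SDK's Asset/TimeSeries objects (all tag names here are hypothetical):

```python
# Sketch: 18 parent tags as assets, 4 time series attached to each.
# With the Python SDK these dicts would become Asset / TimeSeries objects
# passed to client.assets.create(...) and client.time_series.create(...).

PARENT_TAGS = [f"PARENT-{i:02d}" for i in range(1, 19)]   # hypothetical names
SUB_TAGS = ["FLOW", "TEMP", "PRES", "LEVEL"]              # hypothetical names

assets = [{"external_id": p, "name": p} for p in PARENT_TAGS]

time_series = [
    {"external_id": f"{p}:{s}", "name": f"{p} {s}", "asset_external_id": p}
    for p in PARENT_TAGS
    for s in SUB_TAGS
]

print(len(assets), len(time_series))  # 18 assets, 72 time series
```

This keeps the asset hierarchy small while the 343k time series remain plain time series, linked to whichever parent asset they belong to.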
Thanks @Gaetan Helness for the inputs. Could you please share details on this step: “client IT team to add the app registration to all the different groups in the Azure AD”? Is this the same as adding the service principal to the AD group?
Thanks @Everton Colling for your response. I will explore the early adopter program. Meanwhile, I want to get my data model construct for this case clean so that I can start developing the solution. Are there sample data models available that I could look at for quick reference? I understand your point about not sticking to RAW tables but using clean CDF resource types instead. For instance, where do I store the crude assays input data, the diet data (crude composition data for various crudes and their index values), the mass-balance data, etc. in CDF? I am not sure of the final destination schema for these values in the CDF targets. Should I be using the Sequences resource type here? Please advise.
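For tabular inputs like the assay matrix (~500 rows with a fixed set of columns), Sequences are a natural destination: the columns are defined once on the Sequence, and rows are inserted keyed by row number. A sketch of shaping such data for insertion; the column names and values are hypothetical, and the SDK call is shown only as a comment:

```python
# Hypothetical assay-matrix rows prepared for a CDF Sequence.
# Columns are defined once on the Sequence; each row is (row_number, values).

columns = ["assay_item", "linear", "non_linear", "actual"]  # hypothetical

raw_rows = [
    ("Naphtha",  12.1, 12.4, 12.0),
    ("Kerosene",  8.7,  8.9,  8.5),
    ("Diesel",   22.3, 22.0, 22.6),
]

# Shape expected for sequence rows: (row_number, [one value per column]).
rows = [(i, list(values)) for i, values in enumerate(raw_rows)]

# With the Python SDK, insertion would look roughly like:
# client.sequences.data.insert(rows=rows, columns=columns,
#                              external_id="crude_assay_matrix")
print(rows[0])
```

The daily computed outputs (the derived yields over time) fit time series better, since Charts plots time series; the static per-crude matrices fit Sequences or RAW.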
Thanks @roman.chesnokov. Could you please share any reference material that covers all the technical details (using GitHub Actions, etc.)? I notice the main documentation site covers this somewhat abstractly; I am looking for something with the concrete details, like the bootcamp documentation.
Thanks @mathialo. I have two environments, DEV and PROD. Should I manually repeat the same extraction setup steps for PROD separately? Or can I set up extraction pipelines using GitHub Actions so that these steps are automated rather than done manually in each environment?
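Automating the same deployment across DEV and PROD is typically done with a matrix job in a GitHub Actions workflow. A hedged sketch, assuming a deployment script (deploy_pipelines.py) and secret names you would define yourself — all names here are placeholders, not a Cognite-provided workflow:

```yaml
# Hypothetical workflow: deploy the same extraction-pipeline setup
# to DEV and PROD instead of repeating the steps manually.
name: deploy-extraction-pipelines
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        env: [dev, prod]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install cognite-sdk
      # deploy_pipelines.py is a script you would write yourself;
      # it would create/update the pipeline config in the target project.
      - run: python deploy_pipelines.py --env ${{ matrix.env }}
        env:
          CDF_CLIENT_SECRET: ${{ secrets.CDF_CLIENT_SECRET }}
```

Per-environment credentials can be kept apart using GitHub's environment-scoped secrets, so the same workflow file serves both projects.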
Thanks for the inputs @mathialo. So I don't need to invest in any custom coding when using the prebuilt extractors; I just need to configure them with the right parameters and the data extraction can be accomplished.
Thanks @mathialo for your inputs. Could you please share monitoring guidelines and ways of handling the data stream if there are any hiccups in the PI server connectivity? How does the extractor resume from where it stopped or halted, and how are those scenarios handled in the extractor scripts?
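On resuming after a connectivity hiccup: extractors generally persist their extraction state (e.g. the last successfully extracted timestamp per tag) so that a restart backfills from where it stopped instead of starting over. The idea can be sketched with a stdlib-only state store; this is an illustration of the mechanism, not the PI extractor's actual implementation:

```python
import json
import os

# Minimal file-backed state store: remembers the last successfully
# extracted timestamp per tag, so a restarted extractor resumes
# from that point instead of re-reading everything.
class LocalStateStore:
    def __init__(self, path):
        self.path = path
        self.state = {}
        if os.path.exists(path):
            with open(path) as f:
                self.state = json.load(f)

    def get_last(self, tag, default=0):
        return self.state.get(tag, default)

    def set_last(self, tag, timestamp_ms):
        self.state[tag] = timestamp_ms
        with open(self.path, "w") as f:   # persist after each batch
            json.dump(self.state, f)

store = LocalStateStore("extractor_state.json")
start = store.get_last("PI:FLOW-001")  # 0 on the very first run
# ... extract datapoints from `start` onwards, upload to CDF ...
store.set_last("PI:FLOW-001", 1_700_000_000_000)
```

The cognite-extractor-utils package provides ready-made state stores along this pattern (backed by a local file or by RAW), which is what lets a prebuilt extractor pick up where it halted.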
Thanks for the inputs.
Thanks @roman.chesnokov for the inputs. I also see that the Excel workbook has a lot of formulas in several columns (across around 30+ sheets). I am wondering how to extract each of those per-column computations and implement them as Cognite Functions. Maybe I should try setting up RAW tables and then add derived columns to perform these computations as part of transformations.
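Porting the workbook that way usually means turning each Excel column formula into a small Python function, with the RAW rows as inputs. A Cognite Function exposes a handle(client, data) entry point; the sketch below is hedged — the formula, column names, and values are hypothetical:

```python
# Hypothetical port of one Excel column formula to a Cognite Function.
# In Excel:  derived = (feed * yield_pct / 100) - losses

def derived_column(row):
    """Compute one derived value from a RAW row (dict of column -> value)."""
    return row["feed"] * row["yield_pct"] / 100 - row["losses"]

def handle(client, data):
    """Cognite Functions entry point. `client` would be a CogniteClient
    for reading RAW and writing results; here only `data` is used."""
    return {"derived": [derived_column(r) for r in data["rows"]]}

rows = [{"feed": 1000.0, "yield_pct": 12.5, "losses": 5.0}]
print(handle(None, {"rows": rows}))  # {'derived': [120.0]}
```

Keeping one function per formula makes it straightforward to verify each ported column against the original spreadsheet values before wiring everything into transformations.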
Illustration for implementing the Macros computation within CDF
Great! It worked now. I added ipykernel and then ran the above command. After launching the Jupyter notebook, I could see the new env with <myname> showing up in the notebook. Thanks a lot!
Thanks @Kristian Gjestad Vangsnes. I can see the environment is activated (as shown below in BLUE). But when I try to install ipykernel directly, I get the following messages (highlighted in yellow). I am trying to clone the extractor-utils repo and create a custom extractor locally.

D:\pycdf\python-extractor-utils-master\python-extractor-utils-master>poetry env list
cognite-extractor-utils-yeiQVxSK-py3.9 (Activated)

D:\pycdf\python-extractor-utils-master\python-extractor-utils-master>poetry run python -m ipykernel install --user --name eashwar_n
C:\Users\eashwar_n\AppData\Local\pypoetry\Cache\virtualenvs\cognite-extractor-utils-yeiQVxSK-py3.9\Scripts\python.exe: No module named ipykernel
Thanks @Thomas Sjølshagen. When I run SQL transformations with the destination set to ‘Asset hierarchy’ or even ‘Assets’, I don't see any placeholder to map the ‘geoLocation’ object.
Thanks @Eniko Farkas. I guess this course has limited availability and is not open for enrolment.
Thanks @Eniko Farkas for the response. I am doing this independently to further my knowledge of CDF. I managed to upload a P&ID file in ‘Explore Data’, but I am now unsure how to create the tags that need to be approved in ‘Interactive diagrams’. I created labels by editing the file and linked those labels/tags to the assets, but the file still has no tags. Please help.