I’ve provided what should be a very clean visual explanation of the issues I’m having. I’m trying to add events to my chart that indicate the run status of each pump in a system. I know the equipment tag of the pump I’m looking at, and manual CDF searches confirm that the tag has events over the time range I’m looking at. However, the query within Charts returns no events. When I query the sister pump’s events, it identifies only 2 of the 3 events in the time frame, which is useful, but not enough to do what I need to do.
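One possible explanation for seeing 2 of 3 events (a hypothesis, not something the post confirms) is the difference between filtering events that *started* inside the chart window and events whose time range merely *overlaps* it: a run that began before the window, or one with no end time yet, can be dropped by the stricter filter. A minimal pure-Python sketch of the two filters, with made-up event data:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    external_id: str
    start_time: int            # ms since epoch
    end_time: Optional[int]    # None if the run is still ongoing

def events_overlapping(events, window_start, window_end):
    """Keep events whose [start_time, end_time] intersects the window.
    An open-ended event (end_time=None) is treated as still running."""
    hits = []
    for e in events:
        end = e.end_time if e.end_time is not None else window_end
        if e.start_time < window_end and end > window_start:
            hits.append(e)
    return hits

def events_started_in(events, window_start, window_end):
    """Stricter filter: only events that *started* inside the window.
    This drops a run that began before the window opened, which could
    explain a chart showing fewer events than a manual search finds."""
    return [e for e in events if window_start <= e.start_time < window_end]

runs = [
    Event("run-1", 0, 150),      # started before the window
    Event("run-2", 200, 300),
    Event("run-3", 400, None),   # still running
]
n_overlap = len(events_overlapping(runs, 100, 500))   # 3
n_started = len(events_started_in(runs, 100, 500))    # 2
print(n_overlap, n_started)
```

If the manual CDF search uses overlap semantics while the chart query filters on start time (or vice versa), the counts will disagree exactly like this.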
I’m working on a use case for generating a list of Infield tags, sorted by how many times each has been outside of its range in the past x amount of time. There are 2 key problems with implementing this use case:

1. When a limit is updated in Infield, the metadata for the time series in CDF is updated, but past limits, and the times those limits were modified, are not stored in CDF. So I will get false counts of the number of times out of range if I’m looking at any time period where the limits were different from what they currently are. The only alternative is to check all past Infield reports, which is laborious and seems unnecessary.
2. Text-based inputs lack any criterion in CDF indicating whether they are out of range. For some simple rounds where the in-range value is “Normal” it is easy to get something working, but there are others this doesn’t work for. Essentially, despite numerical and text values being communicated to CDF, the in/out of range data rep
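The first problem above can be made concrete with a small sketch. Assuming you could reconstruct a limit history (e.g. from past Infield reports, since CDF metadata only keeps the current limits), counting excursions against that history gives a different answer than counting against the current limits alone. All names and data here are illustrative:

```python
def count_excursions(datapoints, limit_history):
    """Count transitions from in-range to out-of-range.

    datapoints:    list of (timestamp_ms, value), sorted by timestamp.
    limit_history: list of (effective_from_ms, low, high), sorted; each
                   entry applies until the next entry takes effect.
                   This history is an assumed external record -- CDF
                   time series metadata stores only the current limits.
    """
    def limits_at(ts):
        low, high = limit_history[0][1], limit_history[0][2]
        for eff, lo, hi in limit_history:
            if eff <= ts:
                low, high = lo, hi
            else:
                break
        return low, high

    excursions = 0
    was_out = False
    for ts, value in datapoints:
        low, high = limits_at(ts)
        is_out = not (low <= value <= high)
        if is_out and not was_out:   # count each excursion once
            excursions += 1
        was_out = is_out
    return excursions

dps = [(0, 5), (10, 12), (20, 5), (30, 12)]
# Limits were 0..10 until t=25, then widened to 0..15.
history = [(0, 0, 10), (25, 0, 15)]
with_history = count_excursions(dps, history)        # 1
# Applying only the current limits (0..15) to the whole period
# undercounts, which is exactly the false-value problem described:
current_only = count_excursions(dps, [(0, 0, 15)])   # 0
print(with_history, current_only)
```

The point of the sketch is the discrepancy between the two counts: without a stored limit history, the second (wrong) number is the only one you can compute from CDF today.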
Just going to post the full warning. Is this something that is wrong with my installation (I updated the SDK a month or so ago)? Or something on the SDK side of a to-be-implemented feature? I’m curious because it says this is causing the data fetching to run much slower than it can.

~\Anaconda3\lib\site-packages\cognite\client\_api\datapoints.py:755: UserWarning: Your installation of 'protobuf' is missing compiled C binaries, and will run in pure-python mode, which causes datapoints fetching to be ~5x slower. To verify, set the environment variable `PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp` before running (this will cause the code to fail). The easiest fix is probably to pin your 'protobuf' dependency to major version 4 (or higher), see: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
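You can check which protobuf backend is actually active with the snippet below. Note that `api_implementation` is an internal protobuf module, so this is a diagnostic sketch rather than a supported API; a result of `python` means the slow pure-Python mode the warning describes, while `cpp` or `upb` indicates compiled binaries:

```python
# Report the active protobuf backend. 'python' = pure-python (slow
# datapoint fetching); 'cpp' or 'upb' = compiled binaries (fast).
try:
    from google.protobuf.internal import api_implementation
    backend = api_implementation.Type()
except ImportError:
    backend = "protobuf not installed"
print(backend)
```

If the backend is `python`, upgrading protobuf (e.g. `pip install --upgrade "protobuf>=4"`) usually pulls a wheel with the compiled backend, matching the warning’s suggested fix.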