Yes, I talked with @rmaidla and as I understand it they are after the same thing. For now, the view above should at least make it easier to keep the limits/setpoints up to date, even though it isn't updated automatically. Having it automatically updated based on some calculation makes sense as well, but before we can enable something like that we need a way to keep track of historic limits, as pointed out by @Brendan Buckbee in the attached hub post. I would assume that when you are analysing the data and the limit keeps changing, it is important to always know what the limit was in the timeframe you are analysing? I think for dynamic infield points we would need to capture the transformation applied to the data to set the range, and the point used for the transformation, and store that information rather than a constantly updating range. If you change a transformation from e.g. System Pressure ± 5 psig to System Pressure ± 3 psig, that calculation
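One way to capture this idea (purely a sketch, not Charts' actual data model — all names here are made up) is to store the transformation recipe with validity dates, so the limit in effect for any historic analysis window can always be re-derived:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass(frozen=True)
class LimitTransformation:
    """One historic limit definition: the source point plus the offset applied to it.

    Storing the recipe (point + offset) instead of the computed range means the
    historic limit can be recomputed for any timeframe being analysed.
    """
    source_point: str            # e.g. the "System Pressure" time series
    offset: float                # e.g. 5.0 for +/- 5 psig
    valid_from: datetime         # when this definition took effect
    valid_to: Optional[datetime] # None while it is the current definition


def limit_at(history: list[LimitTransformation], when: datetime) -> Optional[LimitTransformation]:
    """Return the transformation that was in effect at a given point in time."""
    for t in history:
        if t.valid_from <= when and (t.valid_to is None or when < t.valid_to):
            return t
    return None
```

When the offset changes (say ± 5 psig becomes ± 3 psig), the old entry gets a `valid_to` and a new entry is appended, so nothing is overwritten.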
Celanese has similar use cases and would be very interested in seeing a solution to this.
@ibrahim.alsyed This appears to be a data quality issue with the PSI data. We can’t have multiple instances of the same alarm active at the same time. Every alarm except potentially the current one should have an end date.
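For reference, the "at most one open instance per alarm" rule is straightforward to check on an export of the alarm records. A pandas sketch (the column names are assumptions, not the actual schema) that flags every open row except the most recent per alarm:

```python
import pandas as pd


def open_instance_violations(alarms: pd.DataFrame) -> pd.DataFrame:
    """Return rows violating the 'at most one open instance per alarm' rule.

    Expects columns: 'alarm_id', 'start', 'end' (end is NaT while active).
    Every open row except the latest start per alarm is a data quality issue.
    """
    open_rows = alarms[alarms["end"].isna()]
    latest_start = open_rows.groupby("alarm_id")["start"].transform("max")
    return open_rows[open_rows["start"] < latest_start]
```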
Yes, I installed a protobuf version >4 and am no longer receiving the errors. I’m not sure why I ended up with <4 to begin with as I’m not on an M1 mac like Hakon suggested.
Celanese - Feature Request - Nice to Have - Vessel Volume Calculations

Vessel volume calculations would be very useful to have, given a level time series and some basic inputs about the vessel. Horizontal vessels in particular are time-consuming to set up and lead to overly long calculations (prone to error) in PI in order to get an accurate volume. In our industry, 2:1 elliptical heads are almost always what we use in our vessels, so having a calculation just for those would help a lot. Eventually it would be good to be able to select a vessel head geometry and get accurate volume results in charts that way. Here's a resource for the calculations that we use: https://neutrium.net/equipment/volume-and-wetted-area-of-partially-filled-horizontal-vessels/

Thanks,
Brendan
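For what it's worth, the 2:1 elliptical-head case is compact enough to sketch directly from the partially-filled-vessel relations collected on the linked Neutrium page. This is an illustrative sketch, not a proposed Charts API; all lengths just need to be in consistent units:

```python
import math


def horizontal_vessel_volume(level: float, diameter: float, tan_tan_length: float) -> float:
    """Liquid volume of a horizontal cylinder with 2:1 semi-elliptical heads.

    level, diameter and tan_tan_length share one length unit; the result is
    that unit cubed.
    """
    r = diameter / 2.0
    h = min(max(level, 0.0), diameter)  # clamp level to the vessel bore
    # Cylindrical section: circular-segment cross-section times tan-tan length.
    segment_area = r**2 * math.acos((r - h) / r) - (r - h) * math.sqrt(2 * r * h - h**2)
    v_cyl = segment_area * tan_tan_length
    # The two 2:1 elliptical heads together form an ellipsoid; each head has
    # depth a = D/4. Partial ellipsoid volume filled to height h:
    a = diameter / 4.0
    v_heads = math.pi * a * h**2 * (3 * r - h) / (3 * r)
    return v_cyl + v_heads
```

At full level this reduces to the cylinder volume plus the full ellipsoid of the heads, which is a handy sanity check.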
Hi Charts Team,

In my previous experience, if a calculated time series contained too much data for Charts, Charts reduced the number of points presented in the chart. Now the behavior appears to be different. I receive what I remember to be the same error message about an excessive time span and reduced data points. However, the time series no longer spans the length of the chart and appears to cut off once it has exceeded the available data-point count. This is making a number of charts unusable for me, limiting me to time ranges that aren't particularly useful for viewing the data.

Thanks,
Brendan
Celanese - Feature Request - Nice to Have - Log Scales for Y-axis

Hoping that this is a simple implementation. It would be great if we had the option to toggle a time series' y-axis to/from log scale without making a calculation to perform a logarithm transformation on the data.
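Outside Charts, this is exactly the axis-scale toggle that plotting libraries expose: the data stays untouched and only the axis mapping changes. An illustrative matplotlib sketch of the behaviour being requested:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display needed
import matplotlib.pyplot as plt
import numpy as np

# Plot the raw values, then toggle the *axis* to log scale --
# no log10() calculation on the time series itself is required.
t = np.linspace(0, 10, 200)
y = np.exp(t)  # spans many orders of magnitude

fig, ax = plt.subplots()
ax.plot(t, y)
ax.set_yscale("log")  # toggle back with ax.set_yscale("linear")
```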
I think there might be a misunderstanding about the sample parameter. The simple moving average has a "Minimum samples" parameter: if a window has fewer than this number of data points, the result of that window will be "Not a Number" (which in practice means the result for that window is not shown in Charts). In other words, setting "Minimum samples" to a high value means you will see fewer results (and maybe no results at all, depending on the window size and the density of the data points).

This was why I was attempting to use a fairly large number for this calculation. I would like to ensure data quality, since it's not a simple calculation where I will visually see something being off. The points in PI typically have a 30s or 1m scan time, so I assume the data should be available in Charts. However, if I try something like 8760 (24*365), I don't get any results. This concerns me a bit, because if it doesn't have that many points to calculate on, it may not be very accurate.
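To make the NaN-below-minimum-samples behaviour concrete, here is a small pandas sketch in which `min_periods` plays the same role as Charts' "Minimum samples" (the data and numbers are illustrative, not Brendan's actual series):

```python
import numpy as np
import pandas as pd

# 1-minute data with a gap: windows covering the gap contain fewer samples.
idx = pd.date_range("2024-01-01", periods=120, freq="min")
values = pd.Series(np.arange(120.0), index=idx)
values.iloc[30:90] = np.nan  # simulate a 60-minute outage

# 60-sample moving average. Any window with fewer than min_periods valid
# points yields NaN -- which a chart renders as a missing result.
strict = values.rolling(window=60, min_periods=55).mean()
lenient = values.rolling(window=60, min_periods=5).mean()
```

Raising the minimum guards quality, but every window touching the gap (or the start of the range) goes blank, which is why a very high value can leave the chart empty.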
@Brendan Buckbee interesting, this isn't something I've heard about before. Couldn't this produce difficult-to-read visualizations if the time series scales are very different? E.g. a single y-axis comparing a time series that fluctuates between 2-3 bar vs. another that fluctuates between 60-90°F?

Regardless, this is a simple extension of the y-axis settings we currently have in Charts and one I suspect would be quite trivial to implement. I'll add it to our backlog.

Yes, it can produce difficult-to-read visualizations. I'm personally not a big advocate of using it, but there are many people within our company who are highly accustomed to viewing things that way, as it's the default option in the historian.
I'm not sure why that calculation is failing when you have >100 samples; we'll have to dig into it to figure out why. My first hypothesis is that it might be due to the combination of the automatic data alignment (resampling and reindexing) + downsampling + the gaps in the original time series. We'll keep you updated as we know more. @Gustavo Zarruk, @Simon Funke, @Rhuan Barreto, FYI.

When it comes to the BTUs to Flare (green) calculation failing to produce more than a few data points, I figured out this is due to the automatic data alignment feature being toggled Off in your calculation. If you turn it On, it produces much more reasonable results. It's quite interesting to compare this BTUs to Flare calculation using the inputs with gap filling against the previous results without gap filling, but this looks like it resolves the issue. In general, this is a good example of why and how data alignment (resampling and reindexing) is always important when working with time series.
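The alignment point generalises beyond Charts. A small pandas sketch (data is illustrative) of why resampling/reindexing matters when combining two series sampled at different timestamps:

```python
import pandas as pd

# Two inputs sampled at different times -- arithmetic on the raw indexes
# produces NaN wherever the timestamps don't match exactly.
flow = pd.Series(
    [10.0, 12.0, 11.0],
    index=pd.to_datetime(["2024-01-01 00:00", "2024-01-01 00:01", "2024-01-01 00:02"]),
)
btu = pd.Series(
    [1000.0, 1050.0],
    index=pd.to_datetime(["2024-01-01 00:00:30", "2024-01-01 00:01:30"]),
)

misaligned = flow * btu  # no shared timestamps -> all NaN

# Align first: put both series on a common 1-minute grid, interpolating
# between known samples before multiplying.
grid = pd.date_range("2024-01-01 00:00", "2024-01-01 00:02", freq="min")
flow_a = flow.reindex(flow.index.union(grid)).interpolate("time").reindex(grid)
btu_a = btu.reindex(btu.index.union(grid)).interpolate("time").reindex(grid)
aligned = flow_a * btu_a
```

With alignment off, the multiplication above only produces results at timestamps both series happen to share, which mirrors the sparse BTUs to Flare output.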
Celanese - Feature Request - Important - Map "Bad" to "nan" on Data Ingress & Improved Replace

I have tried to use gap detection to replace bad values on transmitters as they come into Cognite. However, I have encountered two issues. First, if the instrument is still bad, gap detection seems to return nothing. Second, threshold gap detection has been inconsistent depending on the time constant used for checking for the gap. For example, using 6 hrs as the threshold has resulted in no gaps being detected, while a 1 day threshold detects gaps. This is counterintuitive, because if I have a 1 day gap, I certainly should have a 6 hr gap.

For Celanese, I think it would be much more efficient to just map "Bad" to nan. It is consistent across our site and other Celanese sites that when there is a transmitter issue, it will read "Bad" in PI. This would make the calculations simpler and more consistent, as we could just use the Replace function. Coupled with a modification to the Replace function
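The requested "Bad" → nan mapping is essentially a coercion step on ingress. A pandas sketch of the idea (the status strings are examples of PI digital states, not an exhaustive list):

```python
import pandas as pd


def clean_pi_values(raw: pd.Series) -> pd.Series:
    """Coerce non-numeric PI status strings such as 'Bad' or 'Bad Input'
    to NaN, so downstream Replace/interpolation logic sees ordinary gaps
    instead of string values."""
    return pd.to_numeric(raw, errors="coerce")


raw = pd.Series(["101.3", "Bad", "99.8", "Bad Input", "100.1"])
clean = clean_pi_values(raw)
```

Once the bad readings are NaN, a single replace/fill step handles them uniformly, with no gap-detection thresholds to tune.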
Eric,

I had set the minimum samples high because I was hoping that would ensure limited downsampling of the data. 100,000 data points for a year of data means I'll be getting 5-minute-quality data for the calculation. Unfortunately I can't seem to get the calculation to work with any number greater than 100, and I'm curious as to why that is.

I have started to use gap detection in the calculations and have run into some unexpected issues. Gap detection seems to work okay to replace bad values on the pressure transmitter, and the results of the rolling average BTU and Yearly BTU calculations both look good. However, the source of those two calculations (BTUs to Flare) looks like it only has two data points. BTUs to Flare (green line) appears to have nearly no data points, while the calculations that use this time series have the expected smooth curves.

Thanks,
Brendan
Feature Request - Nice to Have - Single Scale

This is technically a nice to have; however, it is the default option in PI, and many people are very accustomed to viewing unit data in this way. Having this functionality will help with adoption significantly. By default, a PI chart has a single y-axis scaled to the highest max and lowest min of all time series on the graph. This makes apples-to-apples comparisons between different time series easier, as it becomes a simple visual check.

Just a clarifying question: how does a "single y-axis" work when you're working with time series using different units? Or do you mean a single scale per unit? When time series are added to a plot in Charts, each time series auto-scales to its min and max values for a given range. If you want to return a time series' y-axis scale to that min and max range, you can double-click on that y-axis to quickly snap it into position. We do have the "Merge y-axis"
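For reference, the single-scale behaviour being requested boils down to one min/max range computed across all plotted series. A small sketch (the padding factor is an arbitrary choice for illustration, not anything Charts does today):

```python
import numpy as np


def single_scale(series: list[np.ndarray], pad: float = 0.05) -> tuple[float, float]:
    """One shared y-axis range: the lowest min and highest max across all
    series, with a small padding so curves don't touch the frame."""
    lo = min(float(np.nanmin(s)) for s in series)
    hi = max(float(np.nanmax(s)) for s in series)
    span = (hi - lo) or 1.0  # avoid a zero-height axis for constant data
    return lo - pad * span, hi + pad * span


pressure = np.array([2.0, 2.5, 3.0])        # bar
temperature = np.array([60.0, 75.0, 90.0])  # degF
lo, hi = single_scale([pressure, temperature])
```

This also illustrates the readability concern raised above: with mixed units, the 2-3 bar trace becomes nearly flat against a 60-90 °F range.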
Looks like some good features got implemented! We like being able to specify our units manually as a hotfix until the larger set of units and unit awareness is ready. Also, the Equipment & Pumps calculations added from InDSL are all very useful to us for troubleshooting and condition monitoring. More of these types of built-in functions would be a plus on our end.
Hi Eric,

I appreciate the information on the moving average types that are available. I have attempted to use the simple moving average, but again, over the duration of interest (1 year) the chart still will not show any data. I do get the error message for downsampling, but I never see an actual result on the chart. The chart should be public if you would like to take a look.

In general, I would like downsampling to be configurable. Something as simple as allowing downsampling by default, with the ability to toggle it off for a specific chart or time series, would be enough I think.

Thanks,
Brendan
Feature Request - Important - Time Ranges Relative to Current

This is a very useful feature for trending unit data. In PI, setting the end time to '*' means that the trend will end at the current time and update accordingly as new data comes in. Similarly, setting the start time to '*-xy' means the trend will start at the current time minus some offset, where x is an integer and y is a string representing a time unit (ex: s, m, h, d, mo, y, sec, min, hour, etc.). This feature makes monitoring the real-time process a lot easier, since the chart automatically picks up new data and drops old data. It doesn't have to replicate PI's syntax, of course; I think it's efficient syntax, but the functionality, however it's best implemented, is what I'm after.
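A hypothetical parser for the '*' / '*-xy' shorthand might look like this. It is a sketch only: 'mo' and 'y' are deliberately omitted, because fixed-length timedeltas can't represent calendar months or years.

```python
import re
from datetime import datetime, timedelta
from typing import Optional

# Unit aliases in the style described above (seconds through days only).
_UNITS = {"s": "seconds", "sec": "seconds", "m": "minutes", "min": "minutes",
          "h": "hours", "hour": "hours", "d": "days"}


def parse_relative(expr: str, now: Optional[datetime] = None) -> datetime:
    """Resolve '*' to now, and '*-7d' / '*-30min' to now minus that offset."""
    now = now or datetime.now()
    expr = expr.replace(" ", "")
    if expr == "*":
        return now
    match = re.fullmatch(r"\*-(\d+)([a-z]+)", expr)
    if not match or match.group(2) not in _UNITS:
        raise ValueError(f"unsupported relative time: {expr!r}")
    count, unit = int(match.group(1)), _UNITS[match.group(2)]
    return now - timedelta(**{unit: count})
```

A chart with a start of '*-7d' and an end of '*' would re-evaluate both on each refresh, which gives the sliding-window behaviour described above.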
Feature Request - Nice to Have - Time Range Entry

Especially with longer-range charts, it would be nice to be able to easily enter the desired time range in numeric format (ex: 04/15/21). Currently I'm able to enter a range manually by typing the three letters of the month name, but I think numeric entry is easier. In addition, there is an issue with manual entry as-is: both Apr15,2021 and Apr25,2021 are interpreted as Apr 5, 2021. Using the calendar is slow and inefficient for long time ranges.
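The ambiguity described here can be avoided by trying a small list of explicit formats, numeric first. A sketch (the format list is illustrative, not what Charts uses):

```python
from datetime import datetime


def parse_date(text: str) -> datetime:
    """Accept a few unambiguous formats, numeric first (e.g. '04/15/21'),
    falling back to the month-name style the picker accepts today."""
    for fmt in ("%m/%d/%y", "%m/%d/%Y", "%b%d,%Y", "%b %d, %Y"):
        try:
            return datetime.strptime(text.strip(), fmt)
        except ValueError:
            continue
    raise ValueError(f"unrecognized date: {text!r}")
```

Because each format is matched in full, "Apr15,2021" can only ever mean April 15, 2021 — the two-digit day can't be silently truncated to a one-digit one.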