Solved

Did the sampling frequency change over the years?


I'm working on identifying the falling and rising edges of the VAL_23-KA-9101-M01:HSI.StatusMotorOn signal. For that, I shift the TS and subtract the values: if I get -1 it is a falling edge, and if I get 1 it is a rising edge. For the first row it works fine.
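
For context, a minimal sketch of that shift-and-subtract edge detection, assuming the signal sits in a pandas Series of 0/1 values (the sample data below is invented for illustration):

```python
import pandas as pd

# Toy 0/1 motor-status series; values and timestamps are made up for illustration.
status = pd.Series(
    [0, 0, 1, 1, 0, 1],
    index=pd.date_range("2020-01-01", periods=6, freq="10min"),
    name="StatusMotorOn",
)

edges = status - status.shift(1)  # current value minus the previous one
rising = edges[edges == 1]        # 0 -> 1: rising edge (motor switched on)
falling = edges[edges == -1]      # 1 -> 0: falling edge (motor switched off)
```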

But I'm also getting a lot of values different from 1 or -1, especially for more recent years. I guess the problem is at the aggregation step: since I downloaded the data at a 1m frequency, I get non-integer values for some records. So I tried downloading this TS at a 1s frequency, but I get a different number of records depending on the year. E.g., for 2014 I get 34 records for a 10-day time window with a granularity of 1m.
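
For reference, a hedged sketch of this kind of download, assuming a v2-style Cognite Python SDK client (in newer SDK versions the call lives under client.time_series.data.retrieve, and the 10-day window here is illustrative). An average aggregate of a 0/1 signal over a 1m bucket can land anywhere in [0, 1], which is why the differenced values are not always ±1:

```python
from cognite.client import CogniteClient

client = CogniteClient()  # assumes project/credentials are configured via the environment

# "average" of a 0/1 signal over a bucket containing both states is fractional,
# so differencing the aggregate series is not guaranteed to yield -1/0/1.
dps = client.datapoints.retrieve(
    external_id="VAL_23-KA-9101-M01:HSI.StatusMotorOn",
    start="10d-ago",   # illustrative 10-day window
    end="now",
    aggregates=["average"],
    granularity="1m",
)
df = dps.to_pandas()
```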

If I do the same for 2020, I get 1411 records, much more than for 2014, but still only about a tenth of the expected count (14,400 one-minute intervals in 10 days).

If I change the granularity to 1s, I still get the same number of records, just with more precision in the timestamps.

Considering this, is there a recommended granularity for downloading the data that minimizes the distortion of the digital signals?

Best answer by matiasholte (Backend developer, October 11, 2022)

As described in the documentation, we skip periods with no data. Since the data points arrive approximately every 10 minutes, there should be an aggregate value every 10 minutes (unless the requested granularity is coarser than that). There are 1440 ten-minute intervals in 10 days (10 × 24 × 6), which matches the ≈1411 records you got for 2020.

If you drop the aggregates/granularity parameters, you will receive the raw data points, undistorted. These also carry the exact timestamps, not rounded like the aggregates.

Furthermore, by asking for raw data points you don't risk taking the average of two different values, which can easily produce values other than 1 or -1 after differencing.
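
A sketch of that raw download, under the same SDK assumptions as in the question: omitting aggregates and granularity returns the values exactly as ingested, so the differenced series can only contain -1, 0, or 1:

```python
from cognite.client import CogniteClient

client = CogniteClient()  # assumes project/credentials come from the environment

# No aggregates/granularity: raw data points with their exact timestamps.
raw = client.datapoints.retrieve(
    external_id="VAL_23-KA-9101-M01:HSI.StatusMotorOn",
    start="10d-ago",  # illustrative window
    end="now",
).to_pandas()

edges = raw.iloc[:, 0].diff()  # only -1 (falling), 0, or 1 (rising) can appear
```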

If you need data points at regular intervals, you could also consider stepInterpolation, which returns the value of the last data point before (or at the start of) each aggregation interval. You will then always receive an actual value, but you may miss rapid changes within the interval.
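
And a sketch of the stepInterpolation variant, same assumptions as above: each interval carries the last known value at or before its start, so the values stay 0/1, at the price of missing changes faster than the interval width:

```python
from cognite.client import CogniteClient

client = CogniteClient()  # assumes project/credentials come from the environment

step = client.datapoints.retrieve(
    external_id="VAL_23-KA-9101-M01:HSI.StatusMotorOn",
    start="10d-ago",
    end="now",
    aggregates=["stepInterpolation"],  # last value at/before each interval start
    granularity="10m",                 # matches the ~10-minute arrival rate
).to_pandas()
```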

Regarding the ingestion and the sampling frequency there: this is not something we control as part of Open Industrial Data. The data is replicated from a source we do not control, so it is very possible that something has changed on that side.

