Good morning Glen! Sure, I'll be happy to share what I find from my small investigations. I'll keep in touch!
Thank you for the quick reply! Firstly, I was not aware that the CDF aggregation is a time-weighted average, thank you for mentioning this!

If I shift the start time as you showed, I still get a shift in the CDF data features compared to PI. Here is a screenshot:

[Screenshot: CDF vs PI, with the start time in the CDF fetch offset +0.5 units compared to the start time for the PI fetch.]

Shifting the CDF data by 0.5 units, the features overlap much better:

[Screenshot: CDF vs PI, with the CDF index additionally shifted 0.5 units to get a better match.]

However, this means the timestamps are no longer equal for the CDF and PI data. That is not a huge issue if I just compute comparison metrics (RMSE, max errors, percentage errors, etc.), but it is an extra detail that needs to be documented and explained.

Perhaps there was something I misunderstood in your example. For reference, here is the fetch code:

```python
cdf = client.time_series.data.retrieve_arrays(
    external_id=get_ts_external_id_from_name(name=ts_name, client=client),
    start=start_time + 0.5 * pd.Tim
```
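As a side note, the half-granularity shift itself can be sketched with purely hypothetical data (the granularity, timestamps, and values below are made up for illustration): if one system stamps a window average at the window start and the other at the window midpoint, shifting the index by half the granularity lines the features up.

```python
import pandas as pd

# Assumed setup: window averages stamped at the *start* of each window
granularity = pd.Timedelta("1h")
idx_start = pd.date_range("2023-01-01 00:00", periods=4, freq="1h")
cdf_like = pd.Series([1.0, 2.0, 3.0, 4.0], index=idx_start)

# Shift the index by half a window to compare against midpoint-stamped data
shifted = cdf_like.copy()
shifted.index = shifted.index + 0.5 * granularity

print(shifted.index[0])  # 2023-01-01 00:30:00
```

The values are untouched; only the index moves, which is exactly why the shifted timestamps no longer match the other system's timestamps exactly.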
It might be a good idea to store the cache in the project directory. The cache location can be set via arguments to the OAuthInteractive class in the Cognite SDK. It will perhaps be easier to remember to delete it when the auth fails if it stares you in the eye.
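For instance, a minimal sketch (I'm assuming here that the SDK's `OAuthInteractive` credential provider accepts a `token_cache_path` argument; check the version you have installed):

```python
from pathlib import Path

# Keep the token cache in the project directory so a stale cache is
# easy to spot and delete when authentication starts failing.
cache_path = Path.cwd() / ".cognite_token_cache.bin"

# Assumed usage (parameter names not verified against your SDK version):
# creds = OAuthInteractive(
#     authority_url=f"https://login.microsoftonline.com/{tenant_id}",
#     client_id=client_id,
#     scopes=scopes,
#     token_cache_path=cache_path,
# )
```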
… and to make full use of the CDF contextualization, add visuals for document previews and 3D models. All data types could have their own visuals.
I would like to be able to control both the time window and the time series granularity within that window. For example, I would like to specify a time window of "1year" and then a granularity of "12hours". Granted, if you put a granularity of 30 seconds on a time series spanning 10 years with 350 million data points, the request becomes enormous, but my vote is for flexibility over restriction. The functionality could be hidden in a way that avoids accidental settings, or display an "Are you sure?" box if the number of data points returned by the request exceeds a certain threshold. :)
To be clear: I click the stack button, and the graphs are stacked vertically. When I scroll on a y-axis the limits change, but once I move the cursor onto the graphing area the limits are reset to what the stacking algorithm decided. Sometimes I want to change the y-axis limits while in stacked mode, but this is currently not allowed. I hope this clarifies my suggestion.
> @Anders Brakestad are you referring to having control over the granularity for time series, calculations, or both? For time series, this would obviously be only for visualization purposes, whereas calculations could be for both visualization purposes and/or for specifying the granularity of the calculation workflow itself (something we are aware of as a high-value request to make calculation aggregation more predictable and trustworthy). And what sorts of granularity parameters (seconds, minutes, hours, days, etc.) are you most interested in or expect to use most frequently?

It is difficult to say which granularity will be most used. It depends on the use case. I guess "all of the above" will be relevant in data exploration. My point is that you should give the user more control over how the data is visualized. Would it not be possible to allow for arbitrary resampling by letting the user specify "1hour", "14days", etc.?
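In the meantime, a client-side workaround is to resample already-fetched data to an arbitrary granularity with pandas offset strings (the series below is synthetic, just to show the mechanics):

```python
import numpy as np
import pandas as pd

# Synthetic series: 48 points at 30-minute spacing (one day of data)
idx = pd.date_range("2023-01-01", periods=48, freq="30min")
ts = pd.Series(np.arange(48, dtype=float), index=idx)

# Arbitrary granularities via pandas offset strings
hourly = ts.resample("1h").mean()   # like a "1hour" granularity
daily = ts.resample("1D").mean()    # coarser still

print(len(hourly), len(daily))  # 24 1
```

Any valid offset string ("12h", "14D", ...) works, which is essentially the flexibility being asked for, just applied after the fetch instead of in the request.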
Ahh, it was the `df.tz_localize("UTC").tz_convert("CET")` pattern I was not able to come up with on my own :) Thanks! I'm trying to be very careful about the tz conversions, hence my question on this site. Thanks for the help!

Applying the tz conversion does align the PI and CDF `DatetimeIndex`es, apart from the extra timestamps present in the PI DataFrame. If I drop those, they match 100%. I suppose something in the ingestion pipeline leads to some timestamps being missed…?
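For anyone finding this thread later, here is a minimal illustration of the pattern (timestamps and values are made up): the timestamps arrive timezone-naive in UTC, so first declare them as UTC, then convert.

```python
import pandas as pd

# Naive timestamps that are actually in UTC
idx = pd.date_range("2023-06-01 12:00", periods=2, freq="1h")
df = pd.DataFrame({"value": [1.0, 2.0]}, index=idx)

# Declare the index as UTC, then convert to the local zone
local = df.tz_localize("UTC").tz_convert("CET")

print(local.index[0])  # 2023-06-01 14:00:00+02:00 (CEST, summer time)
```

Note that "CET" here is a timezone, so the offset is +01:00 in winter and +02:00 in summer; a fixed one-hour shift would get daylight saving time wrong.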
Thanks. I looked at the guide, but I must admit I am not sure how to use it to help me now 😅

Am I correct to assume that all dates coming from the Cognite API are in UTC? And that it is not possible to pass timezone information to the API, so that the returned dates actually correspond to what I was after? So whenever I fetch a DataFrame from CDF, I need to shift by one hour to manually convert from UTC to CET?
When I use this, an interactive log-in window pops up. Is this what you're referring to? The token is then cached afterwards.
And a similar class based on the device code workflow:

```python
from cognite.client import CogniteClient, ClientConfig
from cognite.client.credentials import Token
from msal import PublicClientApplication
from publicdata_credentials import credentials as creds


class OIDCDeviceCode:
    """Convenience class for authentication toward CDF with OpenID Connect device code.

    Based on the following example: https://github.com/cognitedata/python-oidc-authentication
    """

    def __init__(
        self,
        tenant_id: str = None,
        client_id: str = None,
        cdf_cluster: str = "api",
        cognite_project: str = None,
        client_name: str = "",
    ):
        """Initialize authenticator class.

        Args:
            tenant_id: The ID for the CDF tenant. Type: str
            client_id: The ID for the CDF client. Type: str
            cdf_cluster: The name of the CDF cluster. Type: str. Default: api
            cognite_project: The name of the Cognite project. Type: str
            client_n
```
I have made a couple of convenience classes based on the OIDC examples given here. Perhaps they can be useful to you.

```python
import atexit
from pathlib import Path

from cognite.client import CogniteClient, ClientConfig
from cognite.client.credentials import Token
from msal import PublicClientApplication, SerializableTokenCache
from publicdata_credentials import credentials as creds


class OIDCInteractiveRefresh:
    """Convenience class for authentication toward CDF with OpenID Connect interactive refresh.

    Based on the following example: https://github.com/cognitedata/python-oidc-authentication
    """

    def __init__(
        self,
        tenant_id: str = None,
        client_id: str = None,
        cdf_cluster: str = "api",
        cognite_project: str = None,
        client_name: str = "",
    ):
        """Initialize authenticator class.

        Args:
            tenant_id: The ID for the CDF tenant. Type: str
            client_id: The ID for the CDF client. Type: str
            cdf_cluster :
```
```python
from datetime import datetime

import matplotlib.dates as mdates
import matplotlib.pyplot as plt

# `client` is an authenticated CogniteClient
tag2 = "pi:112558"
tag1 = "pi:112556"
start = datetime(2016, 1, 1)
end = datetime(2022, 11, 1)
granularity = "1d"

data = client.datapoints.retrieve(
    external_id=[tag1, tag2],
    start=start,
    end=end,
    granularity=granularity,
    aggregates=["average"],
).to_pandas()
data.columns = ["Upstream", "Downstream"]
data["dP"] = data.Upstream - data.Downstream

fig, ax = plt.subplots(figsize=(7, 3), dpi=200)
ax.plot(data.dP, color="black", lw=1, label="dP (1d avg)")
ax.axvline(
    datetime(2020, 11, 11), color="salmon", ls="-", lw=0.75,
    label="Change to ~61 bar",  # originally "Endring til ~61 bar"
    zorder=1,
)
ax.grid(ls=":", lw=0.5)
# ax.set_ylim(40, 65)
# ax.set_xlabel("Date")
ax.set_ylabel("dP (bar)")
ax.tick_params("x", labelrotation=45)
ax.legend(bbox_to_anchor=(0.5, 1.0), loc="lower center", ncol=2)
ax.xaxis.set_major_locator(mdates.MonthLocator(interval=6))
ax.xaxis.set_major_formatter(mdates.DateFormatter("%Y-%b"))
# Right-align the tick labels after the locator/formatter are set
for label in ax.get_xticklabels():
    label.set_ha("right")
fig.tight_layout()
fig.savefig("dP_first_stage_separator.png")
```
As a comparison, here is a figure made in Python with matplotlib.
Here is a screenshot of a PNG export in Charts. It is in practice just a screenshot of the Charts interface. Some issues:

- The list of time series is not relevant, and especially not the invisible ones.
- The labels are too small and have too low contrast.
- I would like to change the granularity of the displayed time series to show some peaks that are hidden by the averaging.
- The legend is missing.
- It is difficult to check a value for one month, since the tick labels are year-only.
Awesome, that’s great! :)
You should try to scroll while the mouse cursor is in the plotting area. This will zoom the time axis :) Holding shift while dragging lets you select a zoom window.
Ah, very nice, exactly what I was requesting :)
Yes, I would like to keep the discussions going. I have a few other suggestions as well. How do you want to organize the discussions?
@Eric Stein-Beldring Thanks, that's great to hear! My work e-mail is anders.brakestad@akerbp.com, and my personal (and GitHub) e-mail is anders.brakestad@gmail.com (not sure which one you were after).
Hi again,

Yes, your answer clarified some restrictions on searching and filtering. In the specific case in the original post, I cannot filter on this parameter in the metadata, so I need to do it ad hoc.

Thanks for the help!