Hi Morten,

At this time, unfortunately, we do not have a simple way of getting the last recorded value of a Time Series, except by scanning through the entire series as you describe. I’m adding this insight to our product requirements board for analysis and prioritization.

PS: I’m assuming you’ve seen, and are referring to something more advanced than, the getLatest (…/timeseries/data/latest) endpoint: https://api.cognitedata.com/api/v1/projects/{project}/timeseries/data/latest
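For reference, a minimal sketch of the request body that the data/latest endpoint accepts, assuming the usual items/externalId shape; the external ID and the optional "before" value below are purely illustrative:

```python
# Sketch of a request body for POST .../timeseries/data/latest.
# "before" (optional) restricts the search to datapoints before that time.
def latest_datapoint_request(external_ids, before=None):
    items = [{"externalId": xid} for xid in external_ids]
    if before is not None:
        for item in items:
            item["before"] = before
    return {"items": items}

body = latest_datapoint_request(["pump_42/pressure"])
```

This only builds the JSON body; sending it (with auth headers) is left to whatever HTTP client you already use.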
We don’t have immediate plans to expand our metadata capabilities (field lengths, nesting, etc.) for time series, but I have added this information as an insight for our product development prioritization process so we can do some discovery across customers on the needs, expectations, and problem sets to solve for. I will also update this thread as we progress.
Thank you for the input, Robert!

This seems like a reasonable thing for the SDK to be capable of doing, so I’ve added your insight to Productboard and will link it to a feature/idea for the Python SDK to handle API limits more dynamically (batching/chunking and back-off, with appropriate warnings if necessary). We will likely need more information as we investigate how to implement the functionality, so we hope it’s OK to reach out as needed?
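A rough, SDK-independent sketch of the batching-plus-back-off behavior being discussed; the function name, batch size, and retry defaults are illustrative, not what the SDK will ship:

```python
import time

def call_in_batches(items, call, batch_size=100, max_retries=5, base_delay=0.5):
    """Split `items` into batches and retry each batch with exponential
    back-off. `call` takes a list of items and returns a list of results."""
    results = []
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        delay = base_delay
        for attempt in range(max_retries):
            try:
                results.extend(call(batch))
                break
            except Exception:
                if attempt == max_retries - 1:
                    raise  # give up after the final attempt
                time.sleep(delay)
                delay *= 2  # exponential back-off between retries
    return results
```

In a real client you would catch only retryable errors (429/5xx) rather than bare `Exception`.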
Hi @RobersoSN, really sorry about the (super) tardy response!

The time series documentation is incorrect with respect to the number of metadata keys (i.e., the limit isn’t supposed to be 16). The actual limit is supposed to be 256 key/value pairs, and I just got a PR to update the docs for time series and sequences to: Limits: Maximum length of key is 128 bytes, up to 256 key-value pairs, of total size of at most 10000 bytes across all keys and values.

For events, we believe we may be able to increase the key length to 256 bytes without adverse effects, if that might help a little? Obviously that doesn’t address the value-related workarounds you’re referring to!

What pitfalls should we be aware of for this solution?

We support up to 128 000 bytes for the value side, but anything above 8K (8192 bytes) will not be sortable or filterable. In those cases, you will still get all of the data back in the value field.

Or, are there any access management tools for segregating databases in RAW?

There are not, and w
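The limits quoted above are easy to check client-side before writing. A small sketch; the byte counting assumes UTF-8 (the charset mentioned later in this thread), and the function name is mine, not the SDK's:

```python
MAX_KEY_BYTES = 128       # max length of a single metadata key
MAX_PAIRS = 256           # max number of key/value pairs
MAX_TOTAL_BYTES = 10_000  # combined size of all keys and values

def metadata_within_limits(metadata):
    """Return True if a metadata dict fits the documented limits."""
    if len(metadata) > MAX_PAIRS:
        return False
    total = 0
    for key, value in metadata.items():
        key_bytes = len(key.encode("utf-8"))
        if key_bytes > MAX_KEY_BYTES:
            return False
        total += key_bytes + len(str(value).encode("utf-8"))
    return total <= MAX_TOTAL_BYTES
```

Validating before the write gives a clearer error than a rejected API call.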
Hi, and thank you for the insight @henning.kvinnesland!

I’ve added this problem as “Product Feedback” here on the Hub. Please upvote and provide any additional commentary you may have there, so we can be sure to include the information in our discovery and assessment process for Time Series and Synthetic Time Series.
@Andreas Kimsås Sorry, this slipped my mind!

We’re looking at how/whether we have the capacity to include expanding the metadata key length to 256 in one of the upcoming releases, but we won’t know which until the team commits in one of the upcoming planning sessions. As for the key charset, a quick scan of our sources indicates we use UTF-8.
Hi @RobersoSN,

Has Cognite done any exploration around flexible time-series modelling?

I’m not sure I’m following what you’re asking for when you say “flexible time-series modelling”. Which aspect(s) of the model(s) should be flexible? Are we talking about flexible in terms of operations on two or more time series? For example:

- Merge/combine, split, or persist merged/combined time series?
- Update independent time series, but reflect the updated data in a pre-existing merged “product” of two or more time series, where the merged product includes the time series that were updated?
- Persisting computed time series from data in one or more existing time series?
- Something else altogether?
Hi @Ben Brandt,

Thank you for the insight and feedback!

At present, our focus is to enhance our Azure AD (AAD) support. Auth0 is one of the options we’re considering in the context of expanded IdP support, but we do not currently have that specific expansion planned and committed for an upcoming CDF release. Your feedback and request will be added to help us prioritize IdP support expansion relative to other Cognite Data Fusion platform enhancements.
Additionally, we have a feature defined to expand short notation for data point retrieval to support the same notation for the past as well (i.e. negative short notation values). It’s in the Product Management queue for consideration in 2H2022, but we do not have an explicit schedule, nor a committed plan for delivery right now.
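As a toy illustration of how the existing “-ago” short notation resolves to an absolute timestamp (the grammar here is a simplification for illustration, not the API’s full syntax):

```python
import re

_UNIT_MS = {"s": 1_000, "m": 60_000, "h": 3_600_000,
            "d": 86_400_000, "w": 604_800_000}

def ago_to_ms(expr, now_ms):
    """Resolve e.g. '2d-ago' to an absolute epoch-millisecond timestamp,
    relative to the supplied 'now'."""
    match = re.fullmatch(r"(\d+)([smhdw])-ago", expr)
    if match is None:
        raise ValueError(f"unsupported short notation: {expr!r}")
    count, unit = int(match.group(1)), match.group(2)
    return now_ms - count * _UNIT_MS[unit]
```

The proposed expansion in the post above would presumably add a symmetric form for the other direction; that design is not settled, so it is not sketched here.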
Hi @Anders Albert,

I’ve added your feedback as a bug report in our internal ticketing system, against the Python SDK. Engineering will consider the impact of modifying the current behavior, and we can update this topic once we know more about whether, how, and when we may be able to address it. Sorry about the delay in getting back to you!
Thanks Ben! I’ll create a ticket so we can update and clarify the documentation, and I’ll respond with some answers after I’ve collected authoritative responses from across the teams.
Hi Ben,

This is actually an API limitation that the Python SDK steps into. We do not currently support timestamps prior to Jan 1st, 1971 for a time series (so “no” to your 2nd question). The underlying codebase can support timestamps from 1900 to 2150, but changing the API is being evaluated (impact to the API, existing solutions, etc.). I’m creating a ticket for the request so our engineers can start thinking about it in a more structured way and describe possible workarounds, etc. Thank you for the report!
It seems logical that the API would be able to support the same range of timestamps that the underlying codebase supports.

Agreed, @Mark Felder. We do need to make sure we’re not breaking anything when we allow these dates, based on the fully refactored, multi-cloud-capable software stack that now underlies the Time Series API. The constraints with respect to dates prior to 1971 are from the old stack (which we migrated off before the holidays). So we’re having to plan some work in order to implement support for dates prior to 1971 and up to (potentially) Dec 31, 2150.
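In epoch-millisecond terms, the range being discussed looks roughly like this. The 1900/2150 bounds come from the posts above; the exact endpoint semantics (inclusive/exclusive, time-of-day) are an assumption for illustration:

```python
from datetime import datetime, timezone

def to_epoch_ms(dt):
    """Tz-aware datetime -> epoch ms; pre-1970 dates come out negative."""
    return int(dt.timestamp() * 1000)

MIN_MS = to_epoch_ms(datetime(1900, 1, 1, tzinfo=timezone.utc))
MAX_MS = to_epoch_ms(datetime(2150, 12, 31, 23, 59, 59, tzinfo=timezone.utc))

def in_extended_range(ms):
    """True if a millisecond timestamp falls inside the 1900-2150 window."""
    return MIN_MS <= ms <= MAX_MS
```

The key point is that pre-1970 timestamps are simply negative millisecond values, which is why (per a later post in this thread) adding them is not expected to be a breaking change.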
Hi Robert, This is a request we have captured as part of previous discussions with y’all. Thank you for the additional detail.
@Andreas Kimsås - Is there an average size of the description field where you typically would not need to truncate the data?
@RobersoSN Thank you for summarizing the insight. We’ve captured similar insights and will add this in our Productboard for processing in the context of our Flexible Data Modeling initiative.
When a nextCursor value is returned from an API call, is this cursor value valid forever, or does it expire?

Unless we change something in the API with respect to the content of the cursor, or the encryption key, it is valid forever. It doesn’t store info like cluster, project, or filter parameters; i.e., it’s “valid until it breaks”.

If it expires, how long is it guaranteed to be valid? (This documents part of the contract of the API.)

It doesn’t expire, but we’re considering the viability of making it time out (maybe an hour without accesses = let it expire?).

When we call an API subsequent times with the same parameters and provide a cursor value, are we getting data at the time of the current API call, or cached data from the first time the API was called with those parameters?

We are not using a cached copy of the data, so if you use a cursor but change the parameters, the API might serve data you’re not expecting.

If subsequent calls to the API with a cursor are NOT returning results of a cached copy and data has been i
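The “follow nextCursor until it’s absent” contract described above can be sketched as follows; the page shape is inferred from the answers in this thread, not copied from the API spec:

```python
def fetch_all(fetch_page):
    """Drain a cursor-paginated endpoint. `fetch_page(cursor)` returns a
    dict like {"items": [...], "nextCursor": "..."} where nextCursor is
    omitted (or None) on the last page. Pass cursor=None for page one."""
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)
        items.extend(page["items"])
        cursor = page.get("nextCursor")
        if cursor is None:
            return items
```

Because the cursor does not snapshot the data (per the answer above), items written between page fetches may or may not appear in the results.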
Thank you for the request, @Pedersen Jon-Robert. Your input has been added to our product management prioritization tool (Productboard) for evaluation.
@Edvard Holen It looks like, as of approx July 13th, datetime_to_ms() was updated in v3.1.0 of the Python SDK to support timezone aware datetimes.Have you had a chance to try that version of the SDK yet?
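For anyone checking against that version, here is a stdlib-only stand-in for what “timezone-aware support” means in practice. This is not the SDK’s implementation, just the expected semantics; the fixed UTC+1 offset is for illustration:

```python
from datetime import datetime, timedelta, timezone

def datetime_to_ms_sketch(dt):
    """Convert a timezone-aware datetime to epoch milliseconds."""
    if dt.tzinfo is None:
        raise ValueError("expected a timezone-aware datetime")
    return int(dt.timestamp() * 1000)

# 01:00 at UTC+1 is exactly the Unix epoch, so this yields 0.
cet_ish = timezone(timedelta(hours=1))
ms = datetime_to_ms_sketch(datetime(1970, 1, 1, 1, 0, tzinfo=cet_ish))
```

An aware datetime carries its own offset, so the conversion is unambiguous regardless of the machine’s local timezone.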
@Ben Brandt

Since we didn’t support timestamps prior to 01.01.1970, and we implemented the timestamps prior to that date as negative values, we believe it’s not a breaking change. Do you have a different experience after it was added to the API?
And ignore that reply… *sigh* Too many moving parts, sorry!
Hi @Andreas Kimsås,With the work going into Flexible Data Modeling, we've not been able to prioritize extending the metadata key length. So it's not in the plans for the next couple of months, sorry.
Thank you for this reminder, @RobersoSN!

The current limit was set in order to ensure appropriate performance from the service. We believe we might be able to increase it somewhat, but we have concerns about the impact on expectations and latency that we are still sorting out. We have a couple of proposals in the queue and have reached out to you directly about this.
Hi @Edvard Holen, and thank you for this suggestion. It reads a little like a combination of supporting “bitemporality” and a feature under development we’re currently calling “subscriptions” for Time Series.

Bitemporality is one possible approach to give you data point history for a time series. “Subscriptions” is being designed to return all changes (additions, updates, deletions) for data points in one or more time series in a single API call. When subscriptions are available, you could potentially also handle bitemporality using a small set of time series representing “current” and “old”, as a data model type.

Unfortunately, we do not currently have immediate roadmap plans for bitemporal time series, but subscriptions are on our roadmap for 2023.
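To ground the terminology, a toy illustration of the bitemporal idea (event time vs. record time). This is not a CDF feature, just the concept in a few lines:

```python
# Each entry is (event_time, record_time, value): the value we believed
# held at event_time, as written down at record_time.
history = [
    (100, 1000, 42.0),  # first recording of the point at event_time=100
    (100, 2000, 43.5),  # later correction of the same point
]

def as_of(history, event_time, record_time):
    """What did we believe the value at event_time was, as of record_time?"""
    candidates = [(rt, v) for (et, rt, v) in history
                  if et == event_time and rt <= record_time]
    return max(candidates)[1] if candidates else None
```

Querying “as of” an early record time returns the original value; a later record time returns the correction, which is exactly the history a bitemporal time series would expose.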
Hi Juan, and thank you for this thoughtful expectation. We’re thinking of the initial release of FDM as the starting point for the service. To us, that means we still have some difficult problems to resolve, and we will evolve the functionality after the GA release, including how to handle model changes and their implications with a more robust suite of tools. The team is excited to work with you and others on mapping out opportunities, solutions, and needs in this space!