Thank you for the request, @Pedersen Jon-Robert. Your input has been added to our product management prioritization tool (Productboard) for evaluation.
Q: When a nextCursor value is returned from an API call, is this cursor value valid forever, or does it expire?
A: Unless we change something in the API with respect to the content in the cursor, or the encryption key, it is valid forever. It doesn't store info like cluster, project, or filter parameters; i.e. it's "valid until it breaks".

Q: If it expires, how long is it guaranteed to be valid? (This documents part of the contract of the API.)
A: It doesn't expire, but we're considering the viability of making it time out (maybe an hour without accesses = let it expire?).

Q: When we call an API subsequent times with the same parameters and provide a cursor value, are we getting data at the time of the current API call, or cached data from the first time the API was called with those parameters?
A: We're not using a cached copy of the data, so if you use a cursor but change the parameters, the API might serve data you're not expecting.

Q: If subsequent calls to the API with a cursor are NOT returning results of a cached copy and data has been i…
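For anyone landing on this thread, here's a minimal sketch of how cursor-based pagination typically looks against one of the list endpoints. The events/list endpoint, project name, and token below are placeholders for illustration, not something stated in this thread:

```python
# Sketch: follow nextCursor until the API stops returning one.
import requests

BASE = "https://api.cognitedata.com/api/v1/projects/my-project"   # placeholder project
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

def list_all(filter_body: dict, limit: int = 1000):
    """Yield items from every page of a list endpoint."""
    cursor = None
    while True:
        body = {"filter": filter_body, "limit": limit}
        if cursor:
            body["cursor"] = cursor
        resp = requests.post(f"{BASE}/events/list", json=body, headers=HEADERS)
        resp.raise_for_status()
        payload = resp.json()
        yield from payload.get("items", [])   # items from this page
        cursor = payload.get("nextCursor")    # absent/None on the last page
        if not cursor:
            break
```

The key point from the answers above: keep the filter parameters identical between calls, since the cursor itself doesn't capture them.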
@RobersoSN Thank you for summarizing the insight. We've captured similar insights and will add this to our Productboard for processing in the context of our Flexible Data Modeling initiative.
@Andreas Kimsås - Is there an average size of the description field where you typically would not need to truncate the data?
Hi Robert, This is a request we have captured as part of previous discussions with y’all. Thank you for the additional detail.
"It seems logical that the API would be able to support the same range of timestamps that the underlying codebase supports."

Agreed, @Mark Felder. We do need to make sure we're not breaking anything when we allow dates based on the fully refactored, multi-cloud-capable software stack that now underlies the Time Series API. The constraints on dates prior to 1971 come from the old stack (which we migrated off of before the holidays). So we're having to plan some work in order to implement support for dates prior to 1971 and up to (potentially) Dec 31, 2150.
Hi Ben, this is actually an API limitation that the Python SDK steps into. We do not currently support timestamps prior to Jan 1st, 1971 for a time series (so "no" to your 2nd question). The underlying codebase can support timestamps from 1900 to 2150, but changing the API is being evaluated (impact to the API, existing solutions, etc.). I'm creating a ticket for the request so our engineers can start thinking about it in a more structured way and describe possible workarounds. Thank you for the report!
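To make the current limitation concrete, here's a small sketch (not official SDK behaviour, just an illustration of the lower bound mentioned above; the constant is Jan 1st, 1971 UTC expressed in milliseconds since epoch):

```python
# Sketch: fail early instead of letting the API reject pre-1971 datapoints.
from datetime import datetime, timezone

MIN_SUPPORTED_MS = 31_536_000_000  # 1971-01-01T00:00:00Z in ms since epoch

def to_api_timestamp(dt: datetime) -> int:
    """Convert a datetime to the millisecond epoch value the API expects."""
    ms = int(dt.replace(tzinfo=dt.tzinfo or timezone.utc).timestamp() * 1000)
    if ms < MIN_SUPPORTED_MS:
        raise ValueError(
            f"Timestamp {dt.isoformat()} predates 1971-01-01; "
            "not currently accepted by the Time Series API."
        )
    return ms

print(to_api_timestamp(datetime(1975, 6, 1, tzinfo=timezone.utc)))  # accepted
```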
Thanks Ben! I'll both create a ticket so we can update/clarify the documentation, and respond with some answers here after I've collected authoritative responses from across the teams.
Hi @Anders Albert, I've added your feedback as a bug report in our internal ticketing system, against the Python SDK. Engineering will consider the impact of modifying the current behavior, and we can update this topic once we know more about if/how and when we may be able to address it. Sorry about the delay in getting back to you!
Additionally, we have a feature defined to expand short notation for data point retrieval to support the same notation for the past as well (i.e. negative short notation values). It’s in the Product Management queue for consideration in 2H2022, but we do not have an explicit schedule, nor a committed plan for delivery right now.
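For context, the short notation referred to here is the relative start/end strings that datapoint retrieval already accepts, roughly as in the sketch below. The external id is hypothetical, a configured CogniteClient is assumed, and the exact method path may differ between cognite-sdk versions:

```python
# Sketch: datapoint retrieval using the existing relative ("short") notation.
from cognite.client import CogniteClient

client = CogniteClient()  # credentials/config assumed to be set up already

dps = client.time_series.data.retrieve(
    external_id="my_timeseries",  # hypothetical external id
    start="2w-ago",               # short notation: two weeks before now
    end="now",
)
print(len(dps))                   # number of datapoints returned
```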
Hi @Ben Brandt, thank you for the insight and feedback! At present, our focus is to enhance our Azure AD (AAD) support. Auth0 is one of the options we're considering in the context of expanded IdP support, but we do not currently have that specific expansion planned and committed for an upcoming CDF release. Your feedback and request will be added to help us prioritize IdP support expansion relative to other Cognite Data Fusion platform enhancements.
Hi @RobersoSN,

"Has Cognite done any exploration around flexible time-series modelling?"

I'm not sure I'm following what you're asking for when you say "flexible time-series modelling". Which aspect(s) of the model(s) should be flexible?
- Are we talking about flexibility in terms of operations on two or more time series? Things like merge/combine, split, and persisting the merged/combined result?
- Updating independent time series, but reflecting the updated data in a pre-existing merged "product" of two or more time series, where the merged product includes the time series that were updated?
- Persisting computed time series from data in one or more existing time series?
- Something else altogether?
@Andreas Kimsås Sorry, this slipped my mind! We're looking at how/whether we have the capacity to include expanding the metadata key length to 256 in one of the upcoming releases, but we won't know which until the team commits in one of the upcoming planning sessions. As for the key charset, a quick scan of our sources indicates we use UTF-8.
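Since those key limits are expressed in bytes and keys are UTF-8, a non-ASCII key can hit the limit with fewer characters than you might expect. A quick pre-flight check could look something like this (just a sketch, not part of the SDK):

```python
# Sketch: check a metadata key against the byte limit once UTF-8 encoded.
MAX_KEY_BYTES = 128  # current documented key limit mentioned in this thread

def check_metadata_key(key: str) -> None:
    size = len(key.encode("utf-8"))
    if size > MAX_KEY_BYTES:
        raise ValueError(f"Metadata key is {size} bytes (max {MAX_KEY_BYTES}): {key!r}")

check_metadata_key("temperatur_måler_beskrivelse")  # å takes 2 bytes in UTF-8
```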
Hi and thank you for the insight @henning.kvinnesland ! I’ve added this problem as “Product Feedback” here on the Hub.Please upvote and provide any additional commentary you may have there, so we can be sure to include the information in our discovery and assessment process for Time Series and Synthetic Time Series.
Hi @RobersoSN, really sorry about the (super) tardy response! The time series documentation is incorrect with respect to the number of metadata keys (i.e. the limit isn't supposed to be 16). The actual limit is supposed to be 256 key/value pairs, and I just got a PR to update the docs for time series and sequences to: "Limits: Maximum length of key is 128 bytes, up to 256 key-value pairs, of total size of at most 10000 bytes across all keys and values." For events, we believe we may be able to increase the key length to 256 bytes without adverse effects, if that might help a little? Obviously that doesn't address the value-related workarounds you're referring to!

"What pitfalls should we be aware of for this solution?"
We support up to 128 000 bytes for the value side, but anything above 8K (8192 bytes) will not be sortable or filterable. For those cases, you will get all of the data in the value field returned.

"Or, are there any access management tools for segregating databases in RAW?"
There are not, and w…
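If it helps while planning workarounds, here's a rough sanity-check sketch (not an official tool) based on the numbers quoted above: it flags values that would fall outside the sortable/filterable range for events and sums the total metadata size relevant to the time series / sequences limit:

```python
# Sketch: inspect a metadata dict against the limits quoted in this thread.
MAX_FILTERABLE_VALUE_BYTES = 8_192    # above this: stored, but not sortable/filterable
MAX_EVENT_VALUE_BYTES = 128_000       # hard cap on the value side for events

def inspect_metadata(metadata: dict) -> None:
    total = 0
    for key, value in metadata.items():
        value_bytes = len(str(value).encode("utf-8"))
        total += len(key.encode("utf-8")) + value_bytes
        if value_bytes > MAX_EVENT_VALUE_BYTES:
            print(f"{key}: value too large even for events ({value_bytes} bytes)")
        elif value_bytes > MAX_FILTERABLE_VALUE_BYTES:
            print(f"{key}: stored but not sortable/filterable ({value_bytes} bytes)")
    print(f"total metadata size: {total} bytes")  # time series/sequences cap is 10000
```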
Thank you for the input Robert!This seems like a reasonable thing for the SDK to be capable of doing, so I’ve added your insight to Productboard and will link it to a feature/idea for the Python SDK to more dynamically handle API limits (batching/chunking and back-off with appropriate warnings if necessary). We will likely need more information as we investigate ideas on how to implement the functionality and hope it’s OK to reach out as needed?
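To illustrate the kind of behaviour we have in mind, here is a sketch of the general pattern only (not the SDK's actual implementation); the batch size and endpoint are placeholders:

```python
# Sketch: split a large payload into API-sized batches and back off on HTTP 429.
import time
import requests

def insert_in_batches(url: str, items: list, headers: dict,
                      batch_size: int = 10_000, max_retries: int = 5) -> None:
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        delay = 1.0
        for attempt in range(max_retries):
            resp = requests.post(url, json={"items": batch}, headers=headers)
            if resp.status_code != 429:       # not throttled: accept, or fail loudly
                resp.raise_for_status()
                break
            if attempt == max_retries - 1:
                resp.raise_for_status()       # give up after the last retry
            time.sleep(delay)                 # throttled: wait, then retry the batch
            delay *= 2                        # exponential back-off
```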
We haven't got immediate plans around expanding our metadata capabilities (field lengths, nesting, etc.) for time series, but I have added this information as an insight for our product development prioritization process so we can do some discovery across customers for needs/expectations/problem sets to solve for. We'll also update this thread as we progress.
Hi Morten, at this time, unfortunately, we do not have a simple way of getting the last recorded value of a Time Series, except by scanning through the entire series as you describe. I'm adding this insight to our product requirements board for analysis and prioritization. PS: I'm assuming you've seen, and are referring to something more advanced than, the getLatest (…/timeseries/data/latest) endpoint: https://api.cognitedata.com/api/v1/projects/{project}/timeseries/data/latest
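For reference, a minimal call to that getLatest endpoint looks roughly like this (the project name, token, and external id below are placeholders):

```python
# Sketch: fetch the latest datapoint before a given time for one time series.
import requests

url = "https://api.cognitedata.com/api/v1/projects/my-project/timeseries/data/latest"
headers = {"Authorization": "Bearer <token>"}
body = {"items": [{"externalId": "my_timeseries", "before": "now"}]}

resp = requests.post(url, json=body, headers=headers)
resp.raise_for_status()
for item in resp.json()["items"]:
    # Each item holds (at most) one datapoint: the latest one before "before".
    print(item["externalId"], item["datapoints"])
```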