Hi @Stuart Armstrong, and thanks for reaching out! We would not expect any real, notable performance hit from the number of time series that exist in a project when you retrieve data points (or metadata) for a specific time series using the Time Series API. If there is a performance hit, it will most likely be very minor, and it would be because the number of time series affects our ability to keep the cache(s) current for that specific time series at the moment you read it. Hope that helps!
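For reference, a minimal sketch of the kind of single-series retrieval discussed above, using a recent version of the Cognite Python SDK (the client setup and external ID are placeholders):

```python
from cognite.client import CogniteClient

client = CogniteClient()  # assumes credentials are already configured in the environment

# Retrieve data points for one specific time series; the number of other
# time series in the project should not meaningfully affect this call.
dps = client.time_series.data.retrieve(
    external_id="my-sensor-ts",  # hypothetical external ID
    start="30d-ago",
    end="now",
)
print(len(dps))
```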
Hi @Snehal Jagtap, and thanks for reaching out. It would be very helpful if you could share the request payload you're using that results in the problem you're experiencing.
Hi @sarojbala, We had to push the GA release of Enhanced Data Modeling (aka Flexible Data Modeling, FDM) out to our April release. At that point, the documentation for all of the approaches we intend to support for accessing and managing data and data models in FDM will be made available to the public.
Hi Juan, and thank you for reporting this. I’ve notified the teams of the issue using our internal problem reporting process.
IMO, our asset hierarchy is extremely flexible and can be used to build whatever asset structure you need, whether by linking assets in a tree or by creating relationships via the Relationships resource. This includes enriching the hierarchy and linked assets with other CDF resources: diagrams, labels on assets or relationships, events of different types, time series linked to the assets that represent a sensor, annotations, and so on. If you're referring to our implementation of, or how we have defined, metadata for an asset, time series, sequence, file, etc., it's essentially a JSON object of whatever structure you want. It is size-limited, but below the maximum size it can have whatever structure you choose. Does this answer your question(s)?
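To make that concrete, here's a minimal sketch using the Cognite Python SDK; the IDs and metadata keys are made up for illustration:

```python
from cognite.client import CogniteClient
from cognite.client.data_classes import Asset

client = CogniteClient()  # assumes credentials are already configured

# Link assets into a tree via parent_external_id, and attach free-form metadata.
pump = Asset(
    external_id="pump-101",  # hypothetical IDs
    name="Pump 101",
    parent_external_id="area-a",
    metadata={"manufacturer": "ACME", "model": "P-2000"},
)
client.assets.create(pump)
```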
Hi @EViswanathan, and thank you for this question. I'm not entirely sure whether you're asking if CDF can take defined assets (ingested asset information) and then order/link/update them automatically to match a specific standard? Or are you asking if we have standards-based templates representing asset hierarchies that can be used when defining the asset hierarchy and then ingesting data into that standards-based hierarchy model? Or am I misunderstanding completely?
Hi @Harsha / @ibrahim.alsyed, and thank you for the input. Upsert will be included in the new Enhanced Data Modeling services, but the state of upsert for the more "traditional" Cognite resource APIs (Assets, Events, Labels, Relationships, etc.) is under consideration right now, as we haven't fully decided what we will be doing with those APIs as we roll out and iterate on Data Modeling after GA in April. We have been asked for upsert support through other channels, but with the focus on Data Modeling we have elected not to prioritize that work at this time. I have captured your product idea, and we will use it in our consideration of when and how to implement "update or insert" functionality as we decide how we will continue to provide these APIs.
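In the meantime, a common client-side workaround is to emulate upsert yourself. A rough sketch with the Python SDK, assuming single retrieve-by-external-ID returns None for a missing asset (IDs are hypothetical):

```python
from cognite.client import CogniteClient
from cognite.client.data_classes import Asset

client = CogniteClient()  # assumes credentials are already configured

def upsert_asset(asset: Asset) -> Asset:
    """Create the asset if it doesn't exist, otherwise update it."""
    existing = client.assets.retrieve(external_id=asset.external_id)
    if existing is None:
        return client.assets.create(asset)
    return client.assets.update(asset)
```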
Hi @ibrahim.alsyed, and thank you for this suggestion. If we view this request in the context of the new Data Modeling functionality (currently in beta), I believe it will be possible to add a list of aliases (a list/array of strings) to each data model, which would satisfy this need. Right now, extending the existing APIs for the resources (Assets, Events, Files, etc.) with this functionality isn't part of any plan. I've captured your feedback and will include it in our design considerations for Time Series, Sequences, Enhanced Data Modeling (aka Flexible Data Modeling), and other resources.
Hi again Julieta, When you write "scripts", do you mean routines run as CDF Functions, a Jupyter notebook, or something else? Guessing that's what you're referring to(?), I realize the indentation rules of some scripting languages can be a bit frustrating, as are the warnings/errors from the language interpreters. Unfortunately, that specific behavior isn't something we have a large degree of control over beyond passing on what they provide. As for the codes you're seeing, are these the HTTP status codes (2XX, 3XX, 4XX, and 5XX) and the associated "hints" (when/where hints exist)?
Hi @Adarsh Dhiman, and thank you for the insight! Can you not use a standard approach to managing the OAuth refresh_token in your environment? Or is this a request for some way of automating the refresh process (asynchronously?), or alternatively an SDK-specific way of storing the refresh token for when it's needed, outside of your own application/script/program?
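For anyone reading along: recent versions of the Cognite Python SDK include credential helpers that handle token acquisition and renewal for you. A minimal client-credentials sketch (tenant, client ID/secret, and project are placeholders):

```python
from cognite.client import CogniteClient, ClientConfig
from cognite.client.credentials import OAuthClientCredentials

# The credentials helper fetches and refreshes tokens automatically,
# so the application never has to persist a refresh token itself.
creds = OAuthClientCredentials(
    token_url="https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token",
    client_id="<client-id>",
    client_secret="<client-secret>",
    scopes=["https://api.cognitedata.com/.default"],
)
client = CogniteClient(
    ClientConfig(
        client_name="my-app",  # hypothetical application name
        project="<project>",
        base_url="https://api.cognitedata.com",
        credentials=creds,
    )
)
```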
Hi Julieta, and thank you for your question! Do you have any examples of error messages that are "less than helpful" which you could share, so we can try to fix them? Alternatively, you could work with Cognite Support to collect the messages you find, and they can manage the process of us improving them. Thanks, and I hope this isn't too detrimental to your experience with CDF!
Hi, and thanks for your question! We do not allow changing the value of the isString property once the time series is created. We would need to figure out how to manage data conversion if we allowed it, so we opted to keep the property immutable. Hope this helps clarify!
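In other words, isString has to be set correctly at creation time; if you need to switch types later, the usual pattern is to create a new time series and migrate the data. A small sketch with the Python SDK (external ID and name are placeholders):

```python
from cognite.client import CogniteClient
from cognite.client.data_classes import TimeSeries

client = CogniteClient()  # assumes credentials are already configured

# is_string is fixed at creation time; it cannot be patched afterwards.
client.time_series.create(
    TimeSeries(external_id="status-text", name="Status text", is_string=True)
)
```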
Hi, Thank you for the suggestion. We are constantly evaluating and prioritizing the capabilities we want to make available in our Time Series and Sequences offerings. I will make sure we include this suggestion in our internal discussions and bring it forward for consideration relative to the other needs/problems presented by SLB.
@Sunil Krishnamoorthy - FYI ref Transformations including a geoLocation option.
Hi @eashwar11, The Assets API already supports geo-location data. See the geoLocation parameter under https://docs.cognite.com/api/v1/#tag/Assets/operation/createAssets. As the owner of our ongoing Data Explorer enhancement work, @Andreea Pastinaru will need to be the person responding to that aspect of your request.
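As a concrete example, here is roughly how an asset with a geoLocation (a GeoJSON Feature) could be created via the Python SDK; the names and coordinates are made up, and the exact data classes may vary by SDK version:

```python
from cognite.client import CogniteClient
from cognite.client.data_classes import Asset, GeoLocation, Geometry

client = CogniteClient()  # assumes credentials are already configured

# geoLocation is a GeoJSON Feature; here a simple point (longitude, latitude).
well = Asset(
    external_id="well-42",  # hypothetical IDs
    name="Well 42",
    geo_location=GeoLocation(
        type="Feature",
        geometry=Geometry(type="Point", coordinates=[5.7331, 58.9700]),
    ),
)
client.assets.create(well)
```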
Hi @Prashant Chauhan, What sort of change process are you looking for (and what kinds of engagements)? I.e., what type of conversion(s) are you looking for, and what (business/technical) problem(s) would the conversion help you solve?
Thank you for the suggestion, @oleksandr.cherepnia! We're in the process of rolling out the API-level functionality to GA (it's currently in alpha) and, as part of shepherding this enhanced search (and sort) functionality to general availability, we will be working to make it available in the official SDKs as well.
Hi @Trond Saure, and thanks for the question. Also, there are some potential expansions of, and changes to, the query syntax for data models in the Transformations service before we go to GA, but those haven't been fully decided yet.
Hi Juan, and thank you for this thoughtful feedback. We're thinking of the initial release of FDM as the starting point for the service. To us, that means we still have some difficult problems to resolve, and we will evolve the functionality after the GA release, including how to handle model changes and their implications, with a more robust suite of tools. The team is excited to work with you and others on mapping out opportunities, solutions, and needs in this space!
Hi @Edvard Holen, and thank you for this suggestion. It reads a little like a combination of supporting "bitemporality" and a feature under development we're currently calling "subscriptions" for time series. Bitemporality is one possible approach to giving you the data point history of a time series. "Subscriptions" is being designed to return all changes - additions, updates, deletions - for data points in one or more time series in a single API call. When subscriptions are available, you could potentially also handle bitemporality using a small set of time series representing "current" and "old", as a data model type. Unfortunately, we do not currently have immediate roadmap plans for bitemporal time series, but subscriptions are on our roadmap for 2023.
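To illustrate the "pair of time series" idea mentioned above, here is a rough sketch of the workaround pattern (not a product feature; external IDs are hypothetical, and the retrieval/insert calls assume a recent Python SDK): corrections overwrite the "current" series, while the superseded value is appended to a companion "history" series keyed by ingestion time.

```python
import time
from cognite.client import CogniteClient

client = CogniteClient()  # assumes credentials are already configured

def correct_value(ts: int, new_value: float) -> None:
    """Overwrite a point in the 'current' series, preserving the old value."""
    old = client.time_series.data.retrieve(
        external_id="sensor-current", start=ts, end=ts + 1
    )
    if old.value:  # preserve what we are about to overwrite
        client.time_series.data.insert(
            [(int(time.time() * 1000), old.value[0])],
            external_id="sensor-history",
        )
    client.time_series.data.insert([(ts, new_value)], external_id="sensor-current")
```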
Thank you for this reminder, @RobersoSN! The current limit was set to ensure appropriate performance from the service. We believe we might be able to increase it somewhat, but we have concerns about the impact on expectations and latency that we are still trying to sort out. We have a couple of proposals in the queue, and I have reached out to you directly about this.
Hi @Andreas Kimsås,With the work going into Flexible Data Modeling, we've not been able to prioritize extending the metadata key length. So it's not in the plans for the next couple of months, sorry.
And ignore that reply… *sigh* Too many moving parts, sorry!
@Ben Brandt Since we didn't support timestamps prior to 01.01.1970, and we implemented timestamps before that date as negative values, we believe it's not a breaking change. Have you had a different experience since it was added to the API?
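For context, CDF data point timestamps are milliseconds since the Unix epoch, so dates before 1970 simply become negative numbers. A quick illustration in plain Python:

```python
from datetime import datetime, timezone

# Milliseconds since the Unix epoch; pre-1970 dates come out negative.
dt = datetime(1969, 1, 1, tzinfo=timezone.utc)
ms = int(dt.timestamp() * 1000)
print(ms)  # -31536000000 (one non-leap year before the epoch)
```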
@Edvard Holen It looks like, as of approximately July 13th, datetime_to_ms() was updated in v3.1.0 of the Python SDK to support timezone-aware datetimes. Have you had a chance to try that version of the SDK yet?
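If it helps, a quick sketch of what that looks like (assuming datetime_to_ms is importable from the SDK's utils module; the exact import path may vary by SDK version):

```python
from datetime import datetime, timezone
from cognite.client.utils import datetime_to_ms  # import path may vary by SDK version

# A timezone-aware datetime converts directly to epoch milliseconds.
aware = datetime(2022, 7, 13, 12, 0, tzinfo=timezone.utc)
print(datetime_to_ms(aware))  # 1657713600000
```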