@Thomas Sjølshagen - great news about the Filtering. Just showed it to my colleagues and we are eager to try it out!
@Thomas Sjølshagen, I assume you refer to FDM as the tool we have tested. @Robert Solli tested FDM quite thoroughly about 1 year ago and provided feedback to Cognite at the time. Since then I know things have happened, and we are quite eager to test FDM again this year. We anticipate that it will be a big step in the right direction, but we will have to complete our work on the Neat initiative before we can restart testing FDM.
@Thomas Sjølshagen or others, any update on the maximum metadata key length? According to the current API documentation it is still 128 bytes. Must we consider workarounds, or is there still hope for a change within a couple of months?
This is the problem, and the solution: the externalId in one table is the source_externalId in the relationship, and the target_externalId in the relationship equals the externalId in the third table.
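For reference, a minimal sketch of how that linking looks when creating the relationship with the Cognite Python SDK; the externalIds and client setup below are illustrative assumptions, not our actual data:

```python
# Minimal sketch, assuming the Cognite Python SDK; externalIds are made up,
# and client configuration is assumed to be set up elsewhere.
from cognite.client import CogniteClient
from cognite.client.data_classes import Relationship

client = CogniteClient()

# The relationship ties a row in the first table (source) to a row in the
# third table (target) by reusing their externalIds.
rel = Relationship(
    external_id="pump_42-to-workorder_7",   # hypothetical
    source_external_id="pump_42",           # = externalId in the source table
    source_type="asset",
    target_external_id="workorder_7",       # = externalId in the target table
    target_type="event",
)
client.relationships.create(rel)
```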
Thanks for your reply Shakya, I just want to emphasize the need for graph traversal. With contextualization being one of the main benefits and selling points of Cognite, it is very important that once we have made the effort of contextualizing our data, we can also benefit from that effort by traversing the data model to look for information that is related, but not necessarily in a parent-child relation. Not all things of interest can be directly coupled, and some will require traversal.
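To illustrate the kind of traversal I mean, here is a rough sketch that follows relationships hop by hop with the Python SDK; the helper function and externalIds are made up for the example:

```python
# Rough sketch of hop-by-hop traversal over relationships, assuming the
# Cognite Python SDK; the helper name and externalIds are illustrative.
def related_external_ids(client, start_external_id: str, max_hops: int = 2) -> set[str]:
    seen = {start_external_id}
    frontier = {start_external_id}
    for _ in range(max_hops):
        # Fetch all relationships whose source is in the current frontier
        rels = client.relationships.list(source_external_ids=list(frontier), limit=None)
        frontier = {r.target_external_id for r in rels} - seen
        if not frontier:
            break
        seen |= frontier
    return seen - {start_external_id}

# e.g. related_external_ids(client, "pump_42", max_hops=3)
```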
Sorry, but I don't see the code samples that @omarakabbal refers to. Attachments / links?
@Thomas Sjølshagen not sure if my question was unclear or if the conversation just slowly died? Can we expect to see an increased maximum metadata key length, and how do I know how many bytes a character needs?
I think our primary concern is to set up a controlled framework (e.g. driven from GitLab) that allows for creation of groups and datasets and connects these with AD groups. Is there an API for defining groups? This would make management easier: new users/projects who want to use the platform define their needs as code, and the admin team exercises control by approving merge requests.

Then comes access to subsets of data in different datasets. The use case is that Statnett, as Transmission System Operator (TSO), stores data that belongs to more than a hundred Distribution System Operators (DSOs), and the DSO data is scattered across datasets. We would like to share data with the DSOs (both our data and their own raw/processed data) without, for instance, extracting the data that resides in different datasets and putting it on a separate tenant. We could create a dataset per DSO, but then other use cases that need data from many or all DSOs would become cumbersome.
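As a sketch of what I mean by "groups as code" with the Python SDK; the group name, AD object id, data set id and capability scope below are illustrative assumptions, and the exact capability format (plain dict vs. capability object) depends on the SDK version:

```python
# Sketch of defining a group as code and linking it to an AD group, assuming
# the Cognite Python SDK; all names, ids and scopes here are hypothetical.
from cognite.client import CogniteClient
from cognite.client.data_classes import Group

client = CogniteClient()  # client configuration assumed to be set up elsewhere

dso_group = Group(
    name="dso-acme-read",
    source_id="00000000-0000-0000-0000-000000000000",  # hypothetical AD group object id
    capabilities=[
        {
            "timeSeriesAcl": {
                "actions": ["READ"],
                # scope the capability to the data set(s) holding this DSO's data
                "scope": {"datasetScope": {"ids": [1234567890]}},
            }
        }
    ],
)
client.iam.groups.create(dso_group)
```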
@Thomas Sjølshagen thanks for your very helpful reply. It certainly will give us a lot more flexibility if you increase the maximum metadata key length to 256 bytes on Events. Any timeline for that change? One more question: what is the standard encoding of metadata? Is it UTF-32 for both keys and data fields? And is it the same across all relevant resource types (assets, events, time series …)? The way I understand it, with UTF-32 a 256 byte limit on Event keys gives 256/4 = 64 characters, while with UTF-8 or UTF-16 we would of course get more (and it is a bit more complex to calculate the maximum number of characters, since different characters are encoded with different lengths).
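For my own sanity checking of key lengths, this is how I count bytes per encoding in Python; which encoding the API actually counts against the limit is exactly what I am asking about (the key below is just an illustrative example):

```python
# Counting how many bytes a metadata key occupies per encoding; which encoding
# CDF actually counts against the 128/256 byte limit is the open question.
key = "temperaturmåling_øvre_grense"  # illustrative key with non-ASCII characters

for enc in ("utf-8", "utf-16", "utf-32"):
    print(enc, len(key.encode(enc)))
# utf-8 : 1-4 bytes per character (plain ASCII stays at 1)
# utf-16: 2 or 4 bytes per character (+ a 2-byte BOM with this Python codec)
# utf-32: 4 bytes per character (+ a 4-byte BOM with this Python codec)
```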
Q5) And any plans for allowing Cognite applications that display metadata fields, like ADI and Fusion Data Explorer, to detect the presence of nested structure and display it in a user-friendly way?
Would it be possible to let normal and synthetic time series share the same back-end, but give synthetic time series a time-to-live field, e.g. defaulting to 1 hour? In the example, the synthetic time series would then be deleted automatically after 1 hour. That way we would have the min and max generated for us (maybe not fast enough, but possible to fix?) and we could use the synthetic time series just like the normal ones.
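For comparison, this is roughly how we fetch min and max today as aggregates on an ordinary time series; a Python SDK sketch where the externalId and time range are illustrative, and on older SDK versions the call lives under client.datapoints.retrieve instead:

```python
# Rough sketch of today's workaround: fetch min and max as aggregates on the
# ordinary time series (Cognite Python SDK; externalId and range are illustrative).
from cognite.client import CogniteClient

client = CogniteClient()  # client configuration assumed to be set up elsewhere

dps = client.time_series.data.retrieve(
    external_id="sensor_A_pressure",
    start="30d-ago",
    end="now",
    aggregates=["min", "max"],
    granularity="1h",
)
print(dps.min[:5], dps.max[:5])
```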
( deleted, wrong topic )
Impressed with the response-time and quality of your answers. Kudos!
Thanks a lot Håkon. We are comparing the measurements from some standard sensors with those from fiscal meters that by design report hourly averages. So, when averaging the measurements from the standard meters, we needed to know what aggregation is done so that they are averaged in the same way.
Thanks, I agree with the computed value. The last thing I wonder about is how/if I can be sure that, no matter how the original data points are distributed in time, the hourly average stamped at say 01:00 will always be the weighted average over the timespan [01:00; 02:00[ .
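To make the question concrete, this is the kind of check I would like to be able to rely on; a Python SDK sketch where the externalId and time range are illustrative:

```python
# The kind of check I mean: every hourly average should be stamped on a whole
# hour, covering [HH:00; HH+1:00[ (Cognite Python SDK sketch; externalId and
# range are illustrative).
from datetime import datetime, timezone

from cognite.client import CogniteClient

client = CogniteClient()  # client configuration assumed to be set up elsewhere

dps = client.time_series.data.retrieve(
    external_id="fiscal_meter_flow",
    start="7d-ago",
    end="now",
    aggregates=["average"],
    granularity="1h",
)
for ts_ms in dps.timestamp:
    t = datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc)
    assert t.minute == 0 and t.second == 0, t
```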