Thanks @Thomas Sjølshagen, looking forward to seeing your proposals implemented. Are you impacted by synthetic TimeSeries for your addition tech, @Eric Stein-Beldring?
Heya Thomas, We have use-cases for all of the above. In terms of value, we're most interested in the persistence of computed time-series. And we'd want such a feature to be recursive: if I persist the addition of A and B as M', and later persist M' / C as N', and then backfill some data in A, I want the computation to be re-run for both M' and N'. Does this make sense? /R
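To make the recursive-recomputation idea concrete, here is a minimal stdlib-Python sketch of the semantics being asked for. All names (`Series`, `backfill`, the lambdas) are illustrative stand-ins, not any CDF API: the point is only that a backfill in a source propagates through the whole chain of persisted computations.

```python
# Hypothetical sketch of recursive recomputation for persisted, computed
# time series. Names are illustrative, not a CDF API.

class Series:
    def __init__(self, name, inputs=(), compute=None, data=None):
        self.name = name
        self.inputs = list(inputs)   # upstream Series this one is derived from
        self.compute = compute       # fn(*input_datas) -> data
        self.data = data or []
        self.dependents = []         # downstream persisted series
        for s in self.inputs:
            s.dependents.append(self)

    def backfill(self, new_data):
        """Insert source data, then re-run every downstream computation."""
        self.data = new_data
        self._propagate()

    def _propagate(self):
        for dep in self.dependents:
            dep.data = dep.compute(*(s.data for s in dep.inputs))
            dep._propagate()         # recursion covers the M' -> N' chain

# Persist A + B as M', then M' / C as N'
A = Series("A", data=[2, 4])
B = Series("B", data=[1, 1])
C = Series("C", data=[1, 5])
M = Series("M'", inputs=[A, B], compute=lambda a, b: [x + y for x, y in zip(a, b)])
N = Series("N'", inputs=[M, C], compute=lambda m, c: [x / y for x, y in zip(m, c)])

A.backfill([10, 20])   # both M' and N' are recomputed
print(M.data)  # [11, 21]
print(N.data)  # [11.0, 4.2]
```

A real implementation would of course recompute only the affected time range rather than the whole series, but the dependency-propagation shape is the same.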
That sounds very nice. We'd love to be involved in some workshops for the workflow. Just hit me up (almost) whenever. How about the "live-ness" of data? Are you planning a pipeline for maintaining state between all Domain/Solution models when changes come in from the source? We're especially interested in what you think about the time dependency of flexible models. In my example above, the physical trafo might burn up and be replaced with another, all the while the function does not change. The same goes for sensors on the trafo: the physical equipment might burn, but after replacement the functional model remains the same. Capturing this change in alarms, and the context for changes in load tolerances, is important. Another example we have is the modelling of power lines. When a line is upgraded, say from 300kV to 420kV, the measurement equipment stays the same. In our current model, a TimeSeries then holds the load for both the 300kV and the new 420kV line. This
Yep, that's where we started. And a quick update before we meet tomorrow: we've pivoted from Highcharts towards Cytoscape.js for visualization. The render engine buckled under a couple hundred nodes and a few thousand edges.
Yeah bud! We're currently iterating on a GraphQL API as a middleware between CDF and a TypeScript frontend. For now we're exploring how well Highcharts handles graphs. It's a nascent project, and we're budgeting a week or two of developer time before year's end. We agree that we should decouple the visualization from the business logic; this is where we found the need for the middleware API. It follows that a major point of interest for us is what you envision for the data structures that the visualization consumes. As you say, we're excited for the iteration. Robert
Hi David, Thanks for your response. We're currently working on a prototype graph exploration API internally at Statnett. If you wish, we can invite you in for a session on our experiences once we have something we use internally. Secondly, it's important to note that the graph-traversal tool is not only central to our vision for the Developer Experience, as you say. We will (probably) be changing our end-user-facing UX in 2022 to be graph-based rather than tree-based. @Ola Øyan and @Morten Knutsen are stakeholders here. This has implications for the scope of the graph exploration service, especially as it pertains to filtering and traversal of the graph. For example, a view of the model from line to substation might include switchgear etc. that we don't want to show the end user. Does that filtering happen client- or server-side? Looking forward to hearing your thoughts, Robert
Hi David, The clearest use is one which the tabular view might cover neatly as well. Relationships are directional, and so the app-developer needs insight into that directionality; the lack of this feature adds friction. For the app-developer or the data-scientist, the relationship itself is also an object of interest that you want surfaced in your graph exploration. Most notably the Labels associated with a given relationship, which is a necessary filter for us for performance reasons. We are often designing traversals in the graph, and I don't want the app-developer to keep a notepad up, noting the path as they explore. The table view cannot replicate this feature. Lastly, a DAG is much more naturally explored via nodes and edges. A table makes sense for a tree, which our model is not. As it stands now, the app-developer or designer cannot explore the model on their own, and must involve a subject-matter expert and a data-scientist for each new feature adding a graph traversal.
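The two asks above (label-filtered traversal, automatic path recording) can be sketched together in a few lines of plain Python. The graph and labels here are made-up examples, not our actual model or the CDF Relationships API:

```python
# Illustrative only: exploring a directed graph of relationships,
# filtering edges by label and recording the traversal path so the
# developer doesn't have to keep notes by hand.

from collections import defaultdict

# (source, target, label) triples standing in for Relationships
relationships = [
    ("line_1", "substation_a", "feeds"),
    ("substation_a", "switchgear_1", "contains"),
    ("substation_a", "trafo_1", "contains"),
    ("trafo_1", "sensor_1", "measures"),
]

out_edges = defaultdict(list)
for src, dst, label in relationships:
    out_edges[src].append((dst, label))

def paths_from(node, allowed_labels, path=()):
    """Yield every path that follows only edges with an allowed label."""
    path = path + (node,)
    yield path
    for dst, label in out_edges[node]:
        if label in allowed_labels:
            yield from paths_from(dst, allowed_labels, path)

# Hide "contains" edges (e.g. switchgear the end user shouldn't see)
for p in paths_from("line_1", {"feeds"}):
    print(" -> ".join(p))
```

Whether that label filter is applied client- or server-side is exactly the open question from the earlier message.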
Thanks, Thomas. That is a useful clarification regarding the metadata keys/values. And 256 bytes for keys might be a band-aid, for now at least. Beyond that, we're very eagerly awaiting the deployment of schema services to the SN cluster. Thanks for the heads-up regarding filtering and access management in RAW.
Hi, please do not hesitate to reach out. I'm available on pretty short notice for these kinds of meetings. @erlend.vollset Our hierarchy is pretty wide in a contextually rich region. E.g. Substations (the primary organizing container of assets) share a common parent with ~1e2 other Substations, such that we break the 100k-asset limit in subtrees for parents of Substations. Much of our structure is stored in relationships, and so we seldom use the hierarchy for context (we raised a separate issue on the 100k-asset subtree limitation).
@Thomas Sjølshagen we're currently considering using RAW as a lazy-loading backend for these events. Given the full JSON structure support, we envision events carrying a row reference, with the payload from the source represented there. What pitfalls should we be aware of with this solution, especially as it relates to queries of 10k+ rows? And are there any access management tools for segregating databases in RAW?
Hey Andreea, Our use-case straddles the intended use of just bare-bones Functions and what AIR allows. We'd be happy to share more, and I imagine we can get @Emilie Dahl to invite you when we check in at the next step of our roadmap with Functions. We're primarily concerned with access to a GitLab server hosted on-prem at the customer. Kind regards, Robert
Hey, has there been any follow-up on this? At the very least, we'd like this error to not fail silently, as it currently does in the Fusion front-end. /R
It's worth noting that this is a problem that has cropped up twice in our solutions. @Ola Øyan can probably comment on the value of these solutions: Custom Calculations and Power Transfer Corridor flow timeseries. We'd be very glad to be included in the task, and kept up to date on its priority.
The post wasn't so clear about the problem: the amplitudes of the signals differ enough that it might sway the analysis. The amount of data (a couple of million datapoints) is such that we have to use aggregation. We hold the number of datapoints constant in the bottom view, so that if the user zooms in on a peak, it resolves closer to what's shown in the uppermost plot.
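A tiny stdlib example of why the aggregation choice matters here (the signal and bucket count are made up): averaging per bucket flattens a sharp peak, while a (min, max) envelope per bucket keeps the amplitude visible at any zoom level.

```python
# Averaging vs. min/max envelope when downsampling a signal with a peak.

signal = [0.0] * 1000
signal[500] = 100.0                        # a single sharp peak

def aggregate(data, buckets):
    size = len(data) // buckets
    chunks = [data[i * size:(i + 1) * size] for i in range(buckets)]
    avg = [sum(c) / len(c) for c in chunks]
    envelope = [(min(c), max(c)) for c in chunks]
    return avg, envelope

avg, envelope = aggregate(signal, 10)
print(max(avg))                         # 1.0   -> the peak is almost gone
print(max(hi for _, hi in envelope))    # 100.0 -> the envelope preserves it
```

Holding the datapoint count constant while zooming is then just re-running the aggregation over the visible window.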
@Håkon V. Treider that's what I figured too, but I forgot to check for duplicated timestamps and got a miscount there too. But the combo was the answer! Inclusive end time from the source, and a few duplicate rows.

```python
meter_data = client.datapoints.retrieve(
    start=dates[0],
    end=full_dates[-1] + timedelta(hours=24),
    external_id="ts_externalid",
).to_pandas()
meter_data.shape
>>> (2952, 1)
len(retrieved_df["DATE_TIME"].unique())
>>> 2952
```

Thanks for your help solving today's PEBCAK, @Patrick Mishima and Håkon : ))))))))
Hi Eric! You're understanding me correctly. In our living (stateful-asset) data model, we encode semantics in the labels of assets. Assume it's information akin to whether an item in a store was recently discounted, subject to update a couple of times a week. For our use-case we're currently iterating on how to use labels for Assets and Relationships, but we'll likely expand the use of labels to the other resources you mention. So, the information I'm interested in is a very quick way of charting which assets in a subtree have which labels. The purpose is to get good qualitative information on whether my model is consistent with my expectations and the source. Hope that wasn't too unclear!
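The "chart which labels occur in a subtree" ask boils down to a walk plus a tally. A stdlib sketch, with a made-up hierarchy and labels (not our model, not the CDF API):

```python
# Illustrative only: tally every label that occurs in the subtree under
# a given root, to sanity-check the model against the source.

from collections import Counter

# parent -> children, and asset -> labels, standing in for an asset hierarchy
children = {"root": ["s1", "s2"], "s1": ["t1"], "s2": [], "t1": []}
labels = {"root": [], "s1": ["discounted"], "s2": ["discounted", "new"], "t1": ["new"]}

def label_counts(root):
    """Walk the subtree under `root` and count every label seen."""
    counts = Counter()
    stack = [root]
    while stack:
        node = stack.pop()
        counts.update(labels.get(node, []))
        stack.extend(children.get(node, []))
    return counts

print(label_counts("root"))  # Counter({'discounted': 2, 'new': 2})
```

Server-side, this would presumably map to an aggregate over a subtree filter rather than a client-side walk.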
Hi Anita, sure, let's see about an example. To be clear, we're in the design phase of this service, so our question is mostly "does this raise any red flags" for your engineers. We have two services: one which produces and updates events on conditions in data, and another which consumes and updates events for communication to another system. In sum, a somewhat complex data-monitoring tool. The type of sequence of operations we're concerned about (we're still drawing architecture for "who" owns which part of the monitoring process) is: Let the event in question, E, have the following two metadata fields: {"communicated": "foo", "status": "bar"}. Let A be the monitor which produces events and updates the field "status" in the event. Let B be the consumer which listens for events and updates the field "communicated" in the event. Can we push updates to E from both A and B simultaneously, without regard to potential race-condition problems or the like? Thanks, Robert
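The race being asked about is the classic lost update, which a deterministic stdlib sketch can show without any threads. The key question for the API is whether a metadata update is a field-level patch or a full-object replace; with full-object replace, the interleaving below silently discards A's change:

```python
# Lost-update sketch: A and B each read the whole metadata object,
# change their own field, and write the whole object back.

event_metadata = {"communicated": "foo", "status": "bar"}

# A and B both read the current state...
a_view = dict(event_metadata)
b_view = dict(event_metadata)

# ...each changes only its own field...
a_view["status"] = "resolved"
b_view["communicated"] = "yes"

# ...and each writes the full object back. The later write (B) wins,
# and A's change to "status" is silently lost.
event_metadata = a_view
event_metadata = b_view
print(event_metadata)  # {'communicated': 'yes', 'status': 'bar'}
```

If updates are applied per-field server-side, A and B touching disjoint keys is safe; if they replace the whole metadata object, the services need some coordination (or read-modify-write with a version check).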