Managed Transformation Layers

  • 7 March 2022
  • 3 replies

Userlevel 3


At Statnett we’re thinking about our progress towards a world of more flexible data models. Although your introductions have been great, we’re still confused about the particulars of the transition between types of models.

In particular, we are concerned with the lineage and manageability of data transformations. In the language of your data modelling: we want the arrows between Source, Domain and Solution models to be versioned and configured as code. Is this a planned feature? If so, do you have any sketches of how this will look?

Transformations might be what we call “Interfaces” in the Domain and Solution models. For example, a physical transformer from (e.g.) Siemens implements a functional 300 kV to 132 kV transformer at the substation “Foo”. Currently we model this as two assets (the physical trafo and the functional trafo) with a Relationship labelled “Implements”. These relationships are more or less hand-crafted, but we would like to have these mappings, and links, as code. Preferably so that when a new trafo is loaded in the source, a new instance would appear in each Domain and Solution model. Updates would likewise preferably be bubbled up through the models.
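To make the idea concrete, here is a rough sketch in plain Python of the kind of mapping-as-code we imagine: a versioned function that, given a new trafo record from the source, produces both Domain-model assets and the “Implements” link between them. All names and field choices here are hypothetical, not any existing API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SourceTrafo:
    """A transformer record as delivered by the source system."""
    serial_no: str
    vendor: str
    primary_kv: int
    secondary_kv: int
    substation: str


def map_trafo(src: SourceTrafo) -> dict:
    """Versioned mapping from the Source model to Domain-model assets.

    Produces a physical asset, a functional asset, and an
    'Implements' relationship between them, so that loading a new
    trafo in the source creates both instances automatically.
    """
    phys_id = f"phys_trafo:{src.serial_no}"
    func_id = f"func_trafo:{src.substation}:{src.primary_kv}-{src.secondary_kv}"
    return {
        "assets": [
            {"external_id": phys_id,
             "name": f"{src.vendor} {src.serial_no}"},
            {"external_id": func_id,
             "name": f"{src.primary_kv}kV/{src.secondary_kv}kV trafo at {src.substation}"},
        ],
        "relationships": [
            # physical trafo Implements functional trafo
            {"source": phys_id, "target": func_id, "label": "Implements"},
        ],
    }


# A new trafo arriving in the source yields new Domain-model instances:
instances = map_trafo(SourceTrafo("S-1234", "Siemens", 300, 132, "Foo"))
```

Running the same mapping again on an updated source record would regenerate the instances, which is how we imagine updates “bubbling up”.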


Thanks for your response :)


Best answer by BugGambit 10 March 2022, 14:31


Userlevel 4

cc @BugGambit 

Hi Roberto,

Our approach to flexible data modeling is that you should use the UI to learn the concepts, and use code to “productionalize”. Hence you should absolutely be able to declare a “mapping” between data models (e.g. Domain ↔ Source) through code and update it via the CLI. We imagine everything should be versioned and “published” with intent: the data model, and the “mapping” to data, either directly or via other data models.
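Purely as an illustration of what a versioned, CLI-applied mapping declaration could look like (none of these keys are final or existing product syntax):

```yaml
# Hypothetical mapping file, versioned in git and applied via a CLI.
mapping:
  name: trafo-source-to-domain
  version: 3
  from: SourceModel.Transformer
  to: DomainModel.PhysicalTransformer
  fields:
    externalId: "phys_trafo:{{ serialNumber }}"
    vendor: "{{ manufacturer }}"
  links:
    - label: Implements
      target: DomainModel.FunctionalTransformer
```

The point is that the mapping itself becomes a reviewable, publishable artifact rather than hand-crafted relationships.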

We are looking into this concept and planning to prototype a Figma workflow (UI-driven, with everything having an equivalent flow via code through the CLI) sometime in the coming weeks. We would love to come up with a workflow together with you.


Userlevel 3

That sounds very nice. We’d love to be involved in some workshops for the workflow. Just hit me up for (almost) whenever.

How about the “live-ness” of data? Are you planning on having a pipeline for maintaining state between all Domain/Solution models when changes come from the source? We’re especially interested in what you think about the time dependency of flexible models.


In my example above, the physical trafo might burn up and be replaced with another, all the while the function does not change. The same is true for sensors on the trafo: the physical equipment might burn, but after replacement the functional model remains the same. Capturing this change in alarms, and the context for changes in load tolerances, is important.


Another example we have is the modelling of power lines. When a line is upgraded, say from 300 kV to 420 kV, the measurement equipment stays the same. In our current model, a single TimeSeries then holds the load for both the old 300 kV line and the new 420 kV line. This context is also important to maintain.

How do you think about these time-dependent modelling tasks? 

Hi Robert,

How about the “live-ness” of data?

The plan is to enhance CDF Transformations to support writing to a flexible data model (source/domain/solution). We built CDF Transformations to keep CDF in sync with data from a source system, and it is natural to extend it to support flexible data models.

How do you think about these time-dependent modelling tasks? 

We are planning internally how to do precisely that: model an edge between two nodes, including a valid time span for the edge. We have several use cases where this is important, including use cases where someone wants to traverse a graph at a given timestamp (only traversing edges that were valid at that timestamp).
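A minimal sketch of that idea in plain Python, using the power-line upgrade from the thread as the example. The product API is not designed yet, so everything here (edge shape, field names) is illustrative only.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass(frozen=True)
class Edge:
    """Directed edge with a valid-time span (valid_to=None means still valid)."""
    source: str
    target: str
    valid_from: int          # e.g. epoch milliseconds
    valid_to: Optional[int] = None


def neighbours(edges: List[Edge], node: str, at: int) -> List[str]:
    """Traverse only the edges that were valid at the given timestamp."""
    return [
        e.target
        for e in edges
        if e.source == node
        and e.valid_from <= at
        and (e.valid_to is None or at < e.valid_to)
    ]


# The 300 kV line is upgraded to 420 kV at t=100; the measurement
# equipment (and its TimeSeries) stays attached throughout.
edges = [
    Edge("load_ts", "line_300kV", valid_from=0, valid_to=100),
    Edge("load_ts", "line_420kV", valid_from=100),
]

before_upgrade = neighbours(edges, "load_ts", at=50)    # the 300 kV line
after_upgrade = neighbours(edges, "load_ts", at=150)    # the 420 kV line
```

The same TimeSeries node stays in place while the valid-time spans on its edges capture which functional line it measured when.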

How we would like to expose it in the product is not yet decided, but your use case will be an inspiration for it (cc @Anders Hafreager).