
Product Ideas Pipeline

1170 Ideas

Checklist items to be generated from events in CDF

Status: Gathering Interest

I have an idea to enrich the functionality and possibilities of checklists. When you create a task in a Template, you should have the option to trigger the task on an event in CDF. It is much the same as triggering a task at a scheduled time today, but it needs a bit more logic.

An example: a transmitter is drifting and needs to be washed at a specific value. On the oil rig where I work, we have an analyzer that is sensitive to algae growth from sea water and therefore gives a false positive on chlorine. We have a High High alarm that trips our SRU package, so we need to wash off the algae before the value from the transmitter reaches this HH alarm. This transmitter has an H alarm at 320 mV and a HH alarm at 350 mV, while the "normal" value is about 190-210 mV, so we usually wash it at 250 mV. Today, CCR personnel see the value rising and call instrument techs to wash it. If we could instead create a task in our template that generates a checklist task to wash the analyzer when the process value of the transmitter exceeds 250 mV, and register whether we changed the filter upstream, the value before/after, and so on, that would help a lot. There are many other events that would also be interesting to trigger on if we had this opportunity.

Another example: a new version of a document is pushed to CDF that you should read, and you register that you have read it, or that you have printed it and updated a paper folder.

I really think the opportunities to automate manual tasks with this functionality are many. If there are questions, don't hesitate to contact me.
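To illustrate the "bit more logic" such an event trigger would need, here is a minimal, purely hypothetical sketch of the trigger condition from the analyzer example: fire a checklist task once when the process value crosses the wash threshold, then re-arm only after the value returns to the normal band, so one excursion generates one task rather than one per reading. The class name and thresholds are illustrative assumptions, not an existing CDF feature.

```python
class ThresholdTrigger:
    """Fire once per excursion above a threshold, with re-arm hysteresis."""

    def __init__(self, trigger_at: float = 250.0, rearm_below: float = 210.0):
        self.trigger_at = trigger_at    # wash point, well below the 320 mV H alarm
        self.rearm_below = rearm_below  # upper end of the ~190-210 mV normal band
        self._armed = True

    def update(self, value: float) -> bool:
        """Return True exactly once per excursion above trigger_at."""
        if self._armed and value >= self.trigger_at:
            self._armed = False         # fire once, then wait for the value to normalize
            return True
        if not self._armed and value < self.rearm_below:
            self._armed = True          # value back in the normal band: re-arm
        return False
```

In a real implementation, `update` would be fed datapoints from the transmitter's time series in CDF, and a `True` result would generate the checklist task.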

Access to raw table metadata through API or SDK

Status: Planned for development

Hi,

For a dashboard use case we are working on, I want to extract a list of the column names in each raw table we have in our staging area. At the moment, there does not seem to be a way of doing this. I have made two very hacky ways of accessing this information (see the code example below), but they are either very time-consuming because of inferring the raw schema, or they return nothing because the table has too many columns and the call times out. This makes the method infeasible when running the scripts for our whole environment, which would need to happen regularly.

I feel like there has to be a better way of doing this. I know RAW is a schemaless service, but the columns do exist. Having this information would greatly improve our efforts to get a better overview of our data.

```python
from typing import Any, Dict, List

from pydantic import BaseModel


class RawTable(BaseModel):
    database: str
    table: str

    def to_friendly_name(self) -> str:
        return f"{self.database}.{self.table}"

    def get_inferred_raw_schema(self, cognite_client) -> Dict[str, Any]:
        # Very slow: infers the schema by previewing the table in Transformations.
        schema = cognite_client.transformations.preview(
            query=f"select * from `{self.database}`.`{self.table}` limit 100"
        )
        return schema.schema.dump()

    def get_raw_schema_from_profiler(self, cognite_client) -> List[str]:
        # Times out on wide tables, returning nothing.
        res = cognite_client.post(
            url="/api/v1/projects/[INSERT_PROJECT]/profiler/raw",
            json={"database": self.database, "table": self.table, "limit": 1000},
        )
        return list(res.json()["columns"].keys())
```

Thank you!
Sebastian
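A third possible workaround, offered as a sketch rather than a recommendation: sample a handful of rows directly from the table (e.g. via `client.raw.rows.list(database, table, limit=...)` in the cognite-sdk) and union the keys of their `columns` dicts. The helper below shows just the key-union logic on plain dicts, so it runs standalone; the obvious caveat is that it only finds columns present in the sampled rows, so sparsely populated columns can be missed.

```python
from typing import Dict, Iterable, List


def union_columns(rows: Iterable[Dict[str, object]]) -> List[str]:
    """Union the column names seen across sampled rows, sorted for stability.

    In practice, `rows` would be the `.columns` dicts of rows returned by
    client.raw.rows.list(database, table, limit=100); here it is any
    iterable of dicts so the logic can be demonstrated without a client.
    """
    seen: set = set()
    for row in rows:
        seen.update(row.keys())  # collect every key observed in the sample
    return sorted(seen)
```

This avoids both the schema-inference cost and the profiler timeout, at the price of completeness, which is another argument for exposing the metadata properly in the API.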