
Product Ideas Pipeline


Erik Sveinsvoll (Committed ⭐️)

Excel plugin for CDF with PI Datalink-like functionality (Gathering Interest)

We need an Excel-based solution for CDF that can serve as a practical alternative to PI Datalink, which is currently widely used across existing Aker BP assets.

Today, importing data from CDF through OData is possible, but it is limited and cumbersome compared with the functionality and user experience provided by PI Datalink. My experience is that OData can be made to work in Excel, but it does not provide the same efficient workflow for engineering analysis, validation, and reporting.

For users unfamiliar with PI Datalink, its main strength is that Excel cells can be directly linked to data tags. These links can return:
- current values
- archived values
- calculated values such as average, minimum, and maximum
- raw values based on a timestamp or time interval
- time references taken from other cells in the workbook

A particularly important capability is dynamic date handling. If one cell contains the date or time reference, the user can change that single cell and automatically refresh the entire sheet, or selected calculations, for a different period. This makes it very efficient to reuse the same workbook for new dates, validation cases, or reporting periods.

This functionality is highly valuable for:
- quality checking and validation of time series data
- testing calculations and assumptions
- transparent engineering and production reporting
- fast ad hoc analysis directly in Excel
- workflows where calculation traceability is important

Requested capability

We therefore need an Excel plugin or equivalent Excel integration for CDF that provides a user experience similar to PI Datalink, including:
- cell-based linking to time series/tags
- support for current, historical, raw, and aggregated values
- support for cell-referenced timestamps and time intervals
- simple refresh/recalculation when date cells are changed
- usability that makes the solution practical for day-to-day engineering work

Without this type of functionality, moving users from PI Datalink to CDF will be difficult for many common engineering and reporting workflows.
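To illustrate the query patterns such a plugin would need to surface as Excel formulas, here is a rough sketch of the equivalent calls made directly with the Cognite Python SDK. The time series external ID, dates, and client configuration are assumptions for illustration, not part of the request itself.

# Sketch only: the time series external ID and dates are hypothetical, and
# client configuration (credentials, project, cluster) is omitted.
from datetime import datetime
from cognite.client import CogniteClient

client = CogniteClient()  # assumes authentication is configured elsewhere

TAG = "example:21PT1019"  # hypothetical tag / time series external ID

# The "date cells": in Excel these would be cell references; here, variables.
start = datetime(2025, 1, 1)
end = datetime(2025, 2, 1)

# Current value (comparable to a snapshot value in PI Datalink)
latest = client.time_series.data.retrieve_latest(external_id=TAG)

# Aggregated values over the interval (average, min, max on an hourly grid)
aggregated = client.time_series.data.retrieve_dataframe(
    external_id=TAG,
    start=start,
    end=end,
    aggregates=["average", "min", "max"],
    granularity="1h",
)

# Raw values over the same interval
raw = client.time_series.data.retrieve(external_id=TAG, start=start, end=end)

# Changing start/end and re-running mirrors the "change one date cell and
# refresh the whole sheet" workflow described above.

An Excel plugin would essentially expose these kinds of calls as cell formulas with cell-referenced timestamps.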

Anders Brakestad (Seasoned ⭐️⭐️⭐️)

[Atlas AI] Developer experience feedback (Gathering Interest)

Atlas AI Agent Developer Experience Feedback

In this post I share some of my experiences as a developer of Atlas AI agents. My goal is to be able to define agents in a modular and reproducible way, making it easy to extend and maintain. I want a workflow that facilitates governance, and an evaluation framework that is flexible, transparent, and functional. All my feedback should be viewed in this light. I am not after the best user experience in the browser UI, although that is also important. This post is about all the stuff the end user does not see.

Observability

When we call client.agents.chat(), the SDK returns a final answer, structured data items, and a reasoning trace. Here is a representative response:

{
  "response": {
    "messages": [
      {
        "role": "agent",
        "content": {
          "text": "Short answer: N08 flows to two manifolds ...",
          "type": "text"
        },
        "data": [
          {
            "type": "instance",
            "view": {"space": "sp_process_domain_model", "externalId": "ProcessEquipment", "version": "1.0.0"},
            "instanceId": {"space": "sp_inst_sol_early_anomaly_detection", "externalId": "cb58108b-..."},
            "properties": {
              "name": "N08",
              "downstream": [
                {"externalId": "c5440d61-...", "name": "East Side Production Manifold"},
                {"externalId": "c5440d64-...", "name": "East Side Test Manifold"}
              ]
            }
          }
        ],
        "reasoning": [
          {"content": [{"text": "Executed Find Process Equipment", "type": "text"}]}
        ]
      }
    ]
  }
}

The reasoning field tells us a tool was called. That is the entire observability surface. We do not get:
- Tool call inputs (the query the agent constructed)
- Tool call outputs (raw results before LLM processing)
- Intermediate reasoning (why the agent chose a tool, a filter, or stopped paginating)
- Per-tool latency
- Error and retry traces (a tool called six times shows six identical confirmation strings -- no indication of what changed between attempts or why they failed)
- Token usage per LLM call

When a tool call fails silently, we have no way to distinguish a bad query from a platform limitation from a discarded result. The only recourse is to open the browser UI, re-ask the question, and inspect the trace manually. For bulk evaluation this does not scale.

We need full observability of what the agent does if we are to understand why it fails and to evaluate it properly. An opt-in debug mode on the SDK that returns the complete execution trace -- tool inputs, tool outputs, intermediate reasoning, per-step timing -- would let us diagnose failures programmatically and build evaluators that assess query construction, not just final answers.

Context Engineering

The agent receives context from multiple sources: our YAML instructions, tool descriptions, data model metadata (view and property docstrings in CDF), and a platform-injected system prompt invisible to us.

We have no documentation on what each source is for, how they compose, or what the agent actually sees as its assembled prompt. Without this, instruction engineering is guesswork. We cannot tell whether unexpected agent behavior stems from our instructions, the platform prompt, or the data model metadata. We do not know if our instructions are in direct conflict with Cognite's system prompts.

Request: Document the context composition pipeline. Expose the full assembled prompt.

Show Us the System Prompt

Atlas AI agents ship with a platform-managed system prompt we cannot see. We need to know what is already on the canvas. Does the platform prompt instruct table formatting? Source citation? Uncertainty handling? Are our instructions overriding or conflicting with default behaviors? We are not asking to modify it, just to see it.
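To make the observability gap concrete, here is a minimal sketch of everything that can be extracted from a response shaped like the representative JSON above. The agent external ID and the chat call arguments are assumptions; only the response structure shown earlier is relied on. This same extraction is also what any custom evaluator (see the evaluation section below) would have to build on.

# Sketch: pull out the only things the current response exposes.
# The chat arguments are hypothetical; the response is assumed to be
# (or to serialize to) the JSON document shown above.
from cognite.client import CogniteClient

client = CogniteClient()  # authentication config omitted

result = client.agents.chat(agent_external_id="my-agent",      # hypothetical agent
                            message="Where does N08 flow to?")
response = result if isinstance(result, dict) else result.dump()  # normalize to a plain dict

for message in response["response"]["messages"]:
    answer_text = message["content"]["text"]                        # final answer
    instance_ids = [item["instanceId"]["externalId"]                # returned data instances
                    for item in message.get("data", [])]
    reasoning = [part["text"]                                       # e.g. "Executed Find Process Equipment"
                 for step in message.get("reasoning", [])
                 for part in step.get("content", [])]

# Tool inputs, tool outputs, retries, per-tool latency, and token usage are
# simply not present anywhere in this structure.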
runPythonCode in the SDK

The toolkit supports runPythonCode - custom Python functions the agent can call. In the browser UI these run in a Pyodide sandbox. The SDK has no such sandbox, so the agent fails when runPythonCode tools are present in its configuration, even if the question does not trigger the Python tool.

To evaluate via the SDK in our CI pipelines, we must strip the Python tools from the agent config, adding deployment complexity. The evaluated agent then lacks tools that the production agent has. We lose eval coverage and we are measuring a different system.

SDK parity with the UI agent runtime is a prerequisite for meaningful evaluation.

Performance improvements

The chat completions are very slow: up to several minutes for relatively simple questions. The overhead is significant, which limits user-friendliness and makes evaluation a time-consuming effort.

I did a simple test where I gave the same question to the same agent via both the browser UI and the SDK in a notebook. The UI needed about 2 min 15 s, and the SDK had not finished after 10 minutes. Querying the API directly is fast, so the overhead must be substantial.

Documentation

An alpha release does not need production-grade documentation. But the features it ships should be documented. We should not have to reverse-engineer the SDK response schema, discover tool limitations by trial and error, or guess at supported query patterns.

A short reference covering the response structure, known limitations, and basic code examples would be sufficient at this stage.

CDF Built-in Agent Evaluation

CDF has a built-in agent evaluation feature. We need transparency on what metrics are computed, which LLM-based evaluators are used, what their prompts and success criteria are, and how scores are aggregated. If we cannot interpret the scores, we cannot act on them.

What we need from a CDF evaluation framework:
- Transparent, multi-metric evaluation. Each metric exposed individually -- not a combined pass/fail. Users need to see why something passed or failed.
- Scheduled evaluation runs. Manual triggering does not scale. Define a test suite, run it on a schedule or on agent config changes.
- Conversation evaluation. Not all agent usage is a one-shot question. Follow-up questions and longer conversations need evaluation support too.
- Debug data. Being able to drill into what went wrong.
- AI review. Get a pre-trained agent to evaluate the evaluation, suggesting actions on how to improve. Could be very useful and a time saver.
- The possibility to write custom evaluators would be very cool!

We tested the built-in evaluator with a deliberately corrupted reference response. We changed one entity ID out of 50 to a non-existent value. The candidate response returned the correct 50 IDs -- missing the fake one. The evaluation still passed. The evaluator either is not doing entity-level comparison, or is doing it too loosely to catch a single missing entity in a list of 50. We need to be able to tell whether our agent achieves 100% recall.

Modular definition

Currently we need to construct a monolithic YAML file for deployment. While this works, I have ended up with a custom setup where I split the definition into config.yaml, instructions.md, and tools.yaml, then combine them into the final YAML in a GitHub Workflow that eventually gets deployed. It would be nice if such a modular approach were supported natively by the Cognite Toolkit, as I think it makes it easier to touch files and make changes under version control.

Also, see the runPythonCode modular design point below.
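For illustration, here is a minimal sketch of the assembly step described under Modular definition, combining config.yaml, instructions.md, and tools.yaml into one deployable file with PyYAML. The key names in the combined document are placeholders, not the actual Toolkit agent schema.

# Sketch of a build step that merges the modular source files into one agent YAML.
# File names come from the workflow described above; the output keys
# ("instructions", "tools") are placeholders, not the real Toolkit schema.
from pathlib import Path
import yaml  # PyYAML

def build_agent_yaml(src: Path, out: Path) -> None:
    agent = yaml.safe_load((src / "config.yaml").read_text())          # base agent config
    agent["instructions"] = (src / "instructions.md").read_text()      # long-form instructions
    agent["tools"] = yaml.safe_load((src / "tools.yaml").read_text())  # tool definitions
    out.write_text(yaml.safe_dump(agent, sort_keys=False))

if __name__ == "__main__":
    build_agent_yaml(Path("agent_src"), Path("agent.yaml"))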
runPythonCode modular design

If my agent has Python code as part of its tooling, then I want to make sure that Python code is unit tested, especially if the agent ends up in production with end users who rely on it. Today my workflow is a disgusting mess of resolving relative imports in order to build a self-contained script that can make it into the final YAML. Python code should be written with separation of concerns, modularity, and testability in mind. So we need a much, much better way of introducing Python code to the Toolkit. An approach more similar to Functions makes more sense, but I'll leave the solutioning up to you. I just know that the current setup is not elegant or dev friendly.

Thanks for reading all the way to the end :) Have a great weekend!

Sincerely,
Anders Brakestad

Kubota Mami (Seasoned ⭐️⭐️)

【Charts】Shift Time Series: Historical Data Overlay Issue (New)

At our company, we frequently need to compare specific time‑series trends with historical data. For example, we may want to compare the temperature increase after a plant startup with the trend observed during a previous startup two years earlier.

Currently, the “Shift time series” function in Charts allows us to overlay the same tag with a time offset. However, to do so, we must expand the global time range to cover the entire historical span (for example, setting the range from 2023 to 2025), even when our actual analysis focus is limited to a single year such as 2025. This makes detailed analysis more difficult and reduces readability when working with a clearly defined target period. Ideally, we would like to set the display range only to the target time span, while still being able to retrieve and overlay historical data from earlier periods using a time shift.

This request has also been raised multiple times in the past, including by Mr. TAKASE, and we understand that many other users similarly recognize this as an important and widely needed feature.

While it is technically possible to address this requirement using a custom application, custom applications cannot be embedded into Canvas. Since our primary goal is to visualize and share these comparisons directly within Canvas, this limitation makes a custom‑app‑based solution impractical. For this reason, we strongly believe that this capability should be implemented directly within Charts.

We previously found a similar request in an earlier post (linked below), but given the importance and recurring nature of this requirement across users, we would like to submit this request again.

https://hub.cognite.com/product-user-community-428/charts-shift-time-series-function-limitations-on…
https://hub.cognite.com/ideas/cognite-charts-ability-to-calculate-by-subtracting-two-different-time…
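For context, here is a rough sketch of what the requested behaviour amounts to when done by hand with the Python SDK and pandas: retrieve the historical window separately, shift its timestamps onto the target window, and plot both against the target range only. The tag, dates, and two-year offset are assumptions.

# Sketch: overlay a startup from two years ago on this year's startup window
# without widening the displayed range. Tag and dates are hypothetical.
import pandas as pd
from cognite.client import CogniteClient

client = CogniteClient()  # authentication config omitted

TAG = "example:reactor_temperature"         # hypothetical time series external ID
start, end = pd.Timestamp("2025-03-01"), pd.Timestamp("2025-03-10")
offset = pd.DateOffset(years=2)             # shift applied to the historical data

current = client.time_series.data.retrieve_dataframe(
    external_id=TAG, start=start, end=end,
    aggregates=["average"], granularity="1h",
)
historical = client.time_series.data.retrieve_dataframe(
    external_id=TAG, start=start - offset, end=end - offset,
    aggregates=["average"], granularity="1h",
)
historical.index = historical.index + offset   # align the 2023 startup onto the 2025 axis

ax = current.rename(columns=lambda c: f"{c} (2025)").plot()
historical.rename(columns=lambda c: f"{c} (2023, shifted +2y)").plot(ax=ax)

The idea is that Charts would do this shifting internally, so the visible range stays limited to the target period while the shifted series still loads the earlier data.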

Erik Ormevik (Practitioner ⭐️⭐️⭐️)

Define Condition-Based Capsules (Gathering Interest)

There is a clear need for a native way to organize and classify time‑series data into reusable, condition‑based time segments (“capsules”) representing distinct operating states or events.

Capsules enable users to define such periods based on signals, logic, or events (for example when a valve is open, a pump is running, a compressor is loaded, a system is in start‑up, or production is above a given threshold), and then apply analysis consistently within those time windows.

This functionality is missing in Cognite Data Fusion, making it difficult to perform structured, state‑aware analysis without repeatedly rebuilding filters or custom logic. The problem occurs frequently during process optimization, production and injection accounting, energy and mass balance calculations, equipment performance analysis, and troubleshooting, where engineers need to calculate KPIs such as integrated flow, average efficiency, chemical consumption, or loss rates only during relevant operating conditions.

Without a capsule‑like abstraction, analyses become hard to scale, difficult to maintain, and less transparent, increasing the risk of inconsistent results across users and studies and reducing overall analytical efficiency.

What impact would a change have?
- Time savings
- Better data quality
- Fewer errors / less manual work
- Better decision-making foundation

Capsule‑based time segmentation would have a major impact on engineer productivity by significantly reducing the time and effort spent on repetitive data preparation. Engineers are typically both time‑constrained and pragmatic: if an analysis requires excessive manual filtering, context rebuilding, or custom logic, it will either not be done at all or be done in other tools that already support this efficiently. Capsules allow engineers to define operating contexts once and reuse them across analyses, enabling faster root cause investigations by isolating comparable operating states and eliminating irrelevant data early. In day‑to‑day work, this directly translates into time savings for common tasks such as integrated volumes, efficiency tracking, loss analysis, and conditional KPIs, making it feasible to perform deeper analysis within Cognite Data Fusion rather than exporting data elsewhere. Without this capability, CDF risks being used primarily for data access and visualization, while detailed engineering analysis is performed in platforms that better support state‑based analytics.

Introduce a native capability in Cognite Data Fusion to organize time‑series data into reusable, condition‑based time intervals (“capsules”) that represent operating states, events, or periods of interest. These capsules should allow engineers to easily segment data based on process conditions or logic and reuse the same time windows consistently across analyses, metrics, and investigations. The goal is to enable fast, context‑aware analysis where calculations and KPIs can be evaluated only during relevant operating conditions, reducing repetitive manual filtering and lowering the effort required to perform meaningful engineering analysis.
This improvement would make Cognite Data Fusion a more efficient and attractive environment for daily engineering work and root cause analysis, rather than primarily a data access layer feeding other analytical tools.

If this capability is not implemented, engineers will continue to rely on manual filtering, ad‑hoc logic, or external tools to perform state‑based and event‑based analysis, leading to unnecessary extra work and reduced analytical quality. Practical consequences include longer time spent preparing data, higher risk of inconsistent or incorrect results, and fewer root cause analyses being performed due to the effort required. In reality, time‑constrained engineers will avoid doing deeper trending and investigations in Cognite Data Fusion altogether if similar analyses can be performed more efficiently in other platforms. This results in lower adoption for advanced engineering use cases, increased frustration among users, and Cognite Data Fusion being perceived primarily as a data access or visualization layer rather than a tool for serious process analysis and decision support.

Requested by: @kasperdybvad
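To make the capsule concept concrete, here is a small sketch of the manual workaround it would replace: deriving condition-based windows from a signal with pandas and computing a KPI only inside those windows. The tag, threshold, units, and sampling grid are assumptions.

# Sketch of "capsules by hand": find intervals where a pump is running
# (flow above a threshold) and compute a KPI only within those intervals.
# Tag, threshold, and sampling grid are hypothetical.
import pandas as pd
from cognite.client import CogniteClient

client = CogniteClient()  # authentication config omitted

flow = client.time_series.data.retrieve_dataframe(
    external_id="example:pump_flow",        # hypothetical tag
    start=pd.Timestamp("2025-01-01"), end=pd.Timestamp("2025-02-01"),
    aggregates=["average"], granularity="1m",
).iloc[:, 0]

running = flow > 50.0                        # the condition defining the capsule
# Label contiguous runs of True: each run of consecutive samples is one capsule.
capsule_id = (running != running.shift()).cumsum()[running]

# KPI per capsule: integrated flow, assuming flow in units per minute and 1-minute samples.
integrated_flow = flow[running].groupby(capsule_id).sum()

# Capsule boundaries, reusable for other analyses on the same operating state.
capsule_bounds = flow[running].groupby(capsule_id).apply(lambda s: (s.index.min(), s.index.max()))

A native capsule feature would essentially let this condition be defined once and reused across Charts, calculations, and KPIs, instead of being rebuilt in notebooks for every analysis.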