
Q1 2026 Product Release: Delivering SME Empowerment, Enhanced 3D Context, and Precision Search

We are excited to announce the Q1 2026 release of Cognite Data Fusion (CDF). This quarter, our focus is on breaking down technical barriers for Subject Matter Experts (SMEs) and providing a more immersive, high-precision data experience.

In this release:
- Empowering SMEs: The New Simulators UI (Beta)
- Immersive Data Exploration & 3D Context
- Smarter Search & Precision Tools
- Atlas AI: Continuity & Control
- Performance & Readiness
- Looking Forward: Production-Grade Tailored Solutions

Empowering SMEs: The New Simulators UI (Beta)

We are transitioning from a "code-first" to a "user-first" approach for industrial simulations.

Simulators UI (Beta)
Problem: SMEs are blocked by a "technical wall": simulator connectors remain locked behind APIs that only developers can configure.
What you can do now: Configure your own models independently and use the no-code "Routine Board" for drag-and-drop visual logic orchestration.

Immersive Data Exploration & 3D Context

Correlating data trends with the physical world is critical for efficient troubleshooting.

Activities in Charts
Problem: Industrial users struggle to correlate data trends with real-world operational events.
What you can do now: Overlay activity types, such as work orders, directly onto time series in Charts to instantly connect anomalies with the "why". (Documentation)

3D MiniMap
Problem: When navigating 3D scenes in Search, it is easy to lose orientation and real-world context.
What you can do now: Use customizable real-time minimaps to track the camera position and "fast-travel" through the scene by clicking destinations on the map. (Documentation)

3D Experience in Search
Problem: Users previously lacked 3D scenes in search previews, leaving them without context for model placement.
What you can do now: View 3D scenes and 360-degree images directly in the search preview to gain immediate context for a selected Location.
Documentation

Smarter Search & Precision Tools

We've overhauled search and resource management to prioritize accuracy and ease of use.

Smarter Search Results (opt-in beta; GA April 2026)
Problem: Standard search often fails to handle partial industrial tags or returns broad, inaccurate results for complex strings.
What you can do now: Benefit from advanced "industrial parsing" and prefix matching that prioritize result accuracy over quantity.

File Upload via Search
Problem: SMEs lacked a simple way to upload individual files, such as RCA templates, without relying on technical pipelines.
What you can do now: Upload single files directly through the Search UI and use Search as a resource selector in tools like Industrial Canvas.

Unit-Aware Records (Beta)
Problem: Users must manually convert sensor values when querying, filtering, or displaying data across different unit systems.
What you can do now: Perform unit-aware querying, filtering, and aggregation within Records to eliminate custom unit-conversion code. (Documentation)

Atlas AI: Continuity & Control

Our AI agents are now more persistent and easier to manage within your existing workspace.

Conversation History
Problem: Atlas agents didn't retain session history, forcing users to restart workflows and lose context.
What you can do now: Continue past chats where you left off, or work across multiple parallel conversations without losing context.

Per-Agent Granular Access Control
Problem: Previously, all agents were visible to every user, and only creators could modify them.
What you can do now: Configure specific read, write, and run rights on a per-agent basis so users only interact with authorized agents.

Performance & Readiness

We continue to optimize our core tools to ensure they are production-ready and provide full transparency into pipeline health.
Canvas Performance
Problem: Increased load on Industrial Canvas caused sub-optimal load times and high memory consumption.
What you can do now: Experience a 50% average reduction in load times and up to 40% less memory usage during loading.

3D Browser Caching
Problem: Increasing 3D data volumes led to performance degradation and suboptimal load times for large scenes.
What you can do now: Use client-side browser caching for CAD and point cloud data, with 63% faster 360-image loading and 52% less data transfer, giving returning users near-instant loading.

Tag Detection
Problem: Tag detection wasn't ready for Data Modeling, blocking users from migrating from asset-centric models.
What you can do now: Use production-ready tag detection for Data Modeling, with built-in per-file parsing status. (Documentation)

Full Metrics in Run History
Problem: The UI previously showed limited metrics, making it difficult to fully debug issues or track specific rate limits.
What you can do now: Access the full library of API-tracked metrics, including searchable selectors, directly within the Transformations Run History UI.

Looking Forward: Production-Grade Tailored Solutions

Beyond this release, we are building a future where tailored industrial applications are delivered with unprecedented speed and precision. We are evolving our platform to support production-grade, tailored solutions that can be deployed natively in just days.

Rapid Delivery of Industrial Workflows

We are targeting the "long tail" of industrial use cases: thousands of small and medium-sized workflows that deliver significant value but were previously too complex to build.

- Tailored experiences: Moving from generic interfaces to AI-first, agentic user experiences designed for specific roles, such as reliability engineers or maintenance supervisors.
- Native deployment: Fully featured applications deployed directly as part of the Cognite product stack.
- Immediate value: Targeting 100x faster deployment times and up to 80% lower costs by automating the generation of industrial applications.

Compounding the Value of Your Industrial Data

This new capability is built on the foundation of the Industrial Knowledge Graph and Atlas AI. Instead of creating new data silos, these tailored applications write insights and outcomes back into the knowledge graph, so every workflow contributes to a growing, unified body of industrial knowledge that can be scaled across the entire enterprise.

Ready to dive deeper? Check out our Release Notes.
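As a concrete illustration of what the unit-aware Records capability above removes: today, users normalize mixed-unit sensor values in their own code before filtering or aggregating. A minimal sketch of that manual pattern (the conversion factors are standard; the function and variable names are illustrative, not a Cognite API):

```python
# Manual normalization that unit-aware Records queries make unnecessary:
# convert mixed pressure readings to bar before filtering/aggregating.
TO_BAR = {"bar": 1.0, "psi": 0.0689476, "kPa": 0.01}

def to_bar(value: float, unit: str) -> float:
    """Convert a pressure reading to bar using a fixed factor table."""
    return value * TO_BAR[unit]

readings = [(30.0, "psi"), (2.5, "bar"), (150.0, "kPa")]
in_bar = [round(to_bar(v, u), 3) for v, u in readings]
print(in_bar)  # → [2.068, 2.5, 1.5]
```

With unit-aware Records, this conversion happens at query time instead of in every consumer's code.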

Related products: Product Releases

Try out Project settings to tailor the search experience

Hi everyone!

We know that for an end-user, finding data shouldn't feel like a "needle in a haystack" mission. Last summer, we launched Per-Project Search Configuration, and if you haven't set it up yet, your team might be missing out on a much cleaner experience.

As a Project Admin, you can curate what your users see when they sign in to Search. Instead of a generic view that maps 1:1 to your detailed data model definition, you can hand-deliver the most relevant data.

Why configure your categories?
- Instant clarity: Define exactly which columns appear first. No more side-scrolling to find "Status" or "ID".
- Relevant filtering: Hide the noise. Display only the filters that actually matter for that specific category.
- Context at a glance: Tailor the "Properties" card so the most crucial metadata is front and center.

Quick guide:
1. Check access: Ensure you have appconfig:read/write with app-scope = Search. Your users also need appconfig:read for the configuration to take effect for them.
2. Find the config options: Go to Admin workspace > Project Settings > Categories.
3. Configure: Pick a category and hit + Add.

Pro tip 💡: After you update the columns, remind your users to hit "Reset" in their search column selector to see your shiny new layout!

Ready to clean up your search? Full guide here: https://docs.cognite.com/cdf/configure/project_settings/

Have you tried it? Let me know if you have any feedback!

Related products: Search and Data Exploration

Enhanced Insights: Full Metrics in Transformations Run History

Understanding the inner workings of your data pipelines is key to improving performance and debugging issues. To give you better visibility, we've unlocked the full library of transformation metrics directly in the Transformations Run History UI.

Previously, the UI showed only a few key metrics. Now you can access everything the API tracks, from specific rate limits to granular resource updates, right from the graph. Navigate to the Run history tab in the Transformations UI and select more metrics as shown in the images below.

What's new?
- Searchable metric selector: Use the new dropdown to find exactly what you need. With the new search bar, you can quickly filter through long lists of metrics if you're working with multiple tables or data models.
- Smart defaults: To keep things clean, your most important metrics (such as reads, updates, and total requests) are still shown by default. Everything else is just a click away.

Identifying efficiency gains

This update makes it easier to track the instances.upsertedNoop metric we recently introduced. By comparing "upserted" vs. "upsertedNoop", you can see exactly how much data is being re-written unnecessarily. High no-op counts are a clear sign that you can save time and compute costs with smarter, incremental loading.

Give it a try!

Head over to your Transformations Run History today to explore these new insights. We're always looking to improve, so please share your questions or feedback in the comments below!
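The upserted-vs-noop comparison above boils down to a simple ratio. A minimal sketch of that check, assuming you have already fetched the metric counts for a run (for example via the Cognite Python SDK) into a plain dict, and assuming `instances.upserted` and `instances.upsertedNoop` are disjoint counts; the function itself is illustrative, not part of any Cognite API:

```python
def noop_ratio(metrics: dict) -> float:
    """Fraction of write operations that were no-ops (data unchanged).

    `metrics` maps metric names, as shown in Run History, to counts.
    A high ratio suggests switching to incremental loading.
    """
    upserted = metrics.get("instances.upserted", 0)
    noop = metrics.get("instances.upsertedNoop", 0)
    total = upserted + noop
    return noop / total if total else 0.0

# Example run: 800 of 1000 writes changed nothing, a strong signal
# to filter unchanged rows (e.g. with is_new) before writing.
run = {"instances.upserted": 200, "instances.upsertedNoop": 800}
print(f"{noop_ratio(run):.0%} of upserts were no-ops")  # → 80% of upserts were no-ops
```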

Explore our latest course enhancements and the new Simulator Integrations microlearning series


Microlearning videos: Small lessons, big impact

We've added a new set of bite-sized microlearning videos focused on simulator integrations in CDF. These short lessons cover key concepts and practical steps, making it easier to learn at your own pace.

- Simulator Integrations: Core Concepts. Discover how simulator integrations support optimization and digital twins by connecting simulation tools to CDF, and how these building blocks work together to run and manage simulations effectively.
- The Simulators API: Key Resources and Data Flow. Explore how simulations are created, controlled, and tracked in CDF, from defining models and routines to running simulations and reviewing results and logs.
- Configuring a Simulator Connector. Learn how to set up a Windows-based simulator connector so CDF can securely communicate with your simulation tools, including authentication, network access, and service configuration.
- Configuring Simulator Routines. Learn how to define reusable simulation setups that control how simulations run, covering inputs, outputs, scheduling, and execution logic.
- Data Sampling and Validation. Find out how to ensure your simulations run on reliable, high-quality data by configuring time windows, sampling strategies, and validation checks to catch unstable or invalid inputs.
- Triggering Simulation Runs. Understand how to start simulations manually, automatically, or programmatically, and how to review outcomes using logs and simulation results.

How-to guide articles: Practical tips from the experts

We've added a fresh set of how-to guides. These articles are designed to help you troubleshoot faster, work more efficiently, and get the most out of CDF.

- How to fix Video Playback Failure – Error Code: 232011. Get back to streaming smoothly by resolving the common 232011 playback error. This guide walks you through clearing cache and cookies, checking browser settings, and avoiding common conflicts.
- How to: Understanding CDF User Access and Entra ID Connectivity. Take a closer look at how CDF handles orphaned content and access loss, along with admin best practices to ensure critical data remains accessible.
- How to Build Efficient Transformations in CDF. Improve performance and scalability in your CDF transformations using proven techniques like early data filtering, incremental logic with is_new, and CDF Workflows for complex jobs.
- How to Upload Large Files to CDF. Confidently upload files larger than 5 GiB using multipart upload. Learn how to split data into chunks, enable parallel uploads with retries, and implement the flow using the Cognite Python SDK.
- How-To: Getting started with the CDF File Extractor, from local folder to data model. Set up the CDF File Extractor on Windows to ingest local files into CDF. This guide covers downloading the extractor, configuring authentication, and mapping files to Core Data Model objects.
- How-To: Getting started with the CDF DB Extractor to populate staging (RAW) with records from a CSV file. Ingest local CSV data into CDF staging (RAW) on Windows using the Cognite DB Extractor. Learn how to configure authentication, treat CSVs as spreadsheet databases, and populate target RAW tables.
- How-To: Use a CSV file to populate data points for a time series (data model) with the CDF DB Extractor. Load CSV data directly into CDF time series on Windows. This article explains how to map timestamps, values, and external IDs to create or update CogniteTimeSeries objects.
- How to: Transformation Execution Timestamps Explained. Demystify CDF transformation timestamps (Created Time, Started Time, and Duration) so you can understand performance without confusion from queuing delays.
- How to add filtering in Cognite OPC UA Extractor. Gain more control over OPC UA data ingestion by defining effective filters. Learn how to use the Include transformation and full tag hierarchies for accurate extraction.
- How to resolve 'Bounding Box is empty' error when uploading a .LAZ file. Fix this common 3D point cloud issue by re-saving LAZ files with compressor 2 or converting them to E57 using tools like CloudCompare.
- How to resolve 'This Location contains no files yet' error in diagram parsing for data modeling. Ensure files are visible for diagram parsing in CDF Data Modeling by correctly including them in the data model and linking them to a Location with the right instance spaces.

Master Cognite Data Fusion with new training on Cognite Academy. We'd love to hear your thoughts and help you connect with other experts. Come join the conversation on Cognite Hub.
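The large-file guide listed above centers on splitting a file into parts before uploading. The chunking step can be sketched generically; the part size here is a tuning assumption, and handing each part to the actual upload call (with retries, via the Cognite Python SDK's multipart support) is left out:

```python
import io

CHUNK_SIZE = 256 * 1024 * 1024  # 256 MiB per part; an illustrative choice

def iter_parts(stream, chunk_size=CHUNK_SIZE):
    """Yield (part_number, bytes) pairs for a multipart upload."""
    part = 1
    while True:
        data = stream.read(chunk_size)
        if not data:  # end of stream
            break
        yield part, data
        part += 1

# Demo with an in-memory "file"; real code would open the >5 GiB file
# in binary mode and upload each part (possibly in parallel).
fake_file = io.BytesIO(b"x" * 10)
parts = [(n, len(d)) for n, d in iter_parts(fake_file, chunk_size=4)]
print(parts)  # → [(1, 4), (2, 4), (3, 2)]
```

Because each part is read lazily, memory use stays bounded by one chunk regardless of total file size.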

Related products: Academy Trainings

Cognite Atlas AI Agent Builder Fundamentals

We have released a new course dedicated to building reliable, industry-grade agents within Cognite Atlas AI.

This course is your starting point for building, configuring, and deploying Atlas AI agents in Cognite Data Fusion. You will gain a comprehensive overview of how agents function, how they connect to industrial data, and how to leverage them for smarter automation and decision-making. The course focuses specifically on building agents that adhere to strict operational logic suitable for your use case and industry standards. We demonstrate how to properly configure tools, select the right models based on reasoning benchmarks, and use the instructions box to enforce standard operating procedures.

Key topics covered in this course:
- Tool configuration: How to grant agents access to CDF data through specific tools, creating an assistant that genuinely understands physical assets.
- Model selection: Utilizing industrial benchmarks to select the right models for complex reasoning and precise filtering.
- Operational instructions: Using the instructions box to implement SOPs that keep agents methodical and verified.
- Advanced querying: How to use the Query Knowledge Graph tool to customize your agent for specific data types.

By the end of this course, you'll be equipped to design agents that rely on your data and documents as a source of truth. This approach ensures your AI assistants remain predictable and safe for industrial workflows.

Access the full course on the Academy here.

Related products: Academy Trainings

Introducing NEAT 1.0: Build Better Data Models, Faster!

We are on a mission to help everyone build and deploy better, more scalable, Atlas AI-ready data models in Cognite Data Fusion. Today, we introduce NEAT 1.0 as an official part of our product tooling.

Why NEAT 1.0 matters

A reliable, scalable, and usable data model is the bedrock of the Knowledge Graph in Cognite Data Fusion. NEAT 1.0 is designed to give you confidence that your models are usable, scalable, extendable, and performant. We've taken years of deep product expertise and hard-earned experience from our most demanding customers and turned it into concrete feedback and guardrails you can leverage in your automated development workflow, giving you help exactly when you need it.

What's new?
- Faster & focused: We concentrated on physical data modeling and made NEAT significantly faster than previous versions. We also adopted a clean, object-oriented approach for interacting with features.
- Better quality checks: We've included a roughly 2x larger library of data model validators (constantly growing at thisisneat.io/validation). This means more checks to ensure your model is viable, scalable, and maintainable.
- Deep pre-deployment analysis: The dry-run feature was completely rebuilt to give you a deep, clear analysis of the changes that will occur, including a severity score, before you even push your model to CDF.
- Easy issue navigation: You get rich analysis of data model issues with interactive navigation, search, and clustering, making it simple to find and fix problems.
- Ready for governance: We take a user-centric approach to anything graph-related, offering four pre-built, configurable data model governance profiles to jumpstart your quality assurance.

What about legacy features?

NEAT 1.0 embraces focus and minimalism. To make this release more performant and easier to maintain, we removed some features from older, legacy versions that did not align with our new, streamlined approach. Legacy features remain available via "from cognite.neat.legacy import NeatSession" while we continue to port them to 1.0.

Getting started? Watch the short video and check out https://cognite-neat.readthedocs-hosted.com/en/latest/installation.html to try it for yourself!

Related products: Product Releases

Q4 2025 Product Release: Delivering Faster Troubleshooting, Higher Data Quality, and Full-Scale Optimization

2025: Delivering Industrial Scale and Reliability

This year, we turned industrial potential into enterprise reality. Every release, from Q1 to Q4, was designed to help you build and run AI and data solutions at unprecedented scale. We focused on three pillars:

- Data foundation at scale: Handle billions of records with ease and accelerate model creation to eliminate bottlenecks.
- Workflow automation: Introduced visual orchestration and enhanced validation for reliable, high-volume processes.
- AI you can trust: Moved AI agents from pilot to production with governance, monitoring, and precision tuning for enterprise-grade reliability.

Our Q4 release completes this journey, delivering the critical building blocks that make CDF the only platform purpose-built for industrial scale.

In this release:
- Operational Excellence & Field Work Replacement
- Unlocking Scalability and Optimization
- Driving Data Quality & Model Confidence
- Agent Reliability and Precision (Atlas AI)
- Platform Foundations

Operational Excellence & Field Work Replacement

Streamline field work, enable safe remote operations via 3D, and boost efficiency with improved operational context and real-time troubleshooting.

Faster, Cleaner Troubleshooting in Industrial Canvas (Beta)
Problem: Cluttered canvas. Complex diagrams became unreadable due to overlapping connection lines, slowing down troubleshooting.
What you can do now: Instant flow tracing. The canvas is now cleaner: hover over any connection path to instantly highlight that specific flow, increasing efficiency when troubleshooting.

Get Immediate Context: Location Prompt (New User Experience)
Problem: Overlooked context. New users often missed the critical Location filter, leading to poor data context and frustration.
What you can do now: Contextual prompting. If a location is not selected, a pop-up prompt directs users to the filter, ensuring more users set the right context immediately and improving the data quality of their work.

Seamless Planning for Frontline Teams (Beta)
Problem: Limited schedule transparency. Supervisors lacked full visibility into what was planned for the upcoming weeks and control over what was published to frontline teams.
What you can do now: Full schedule control. The new Schedules tab provides full transparency into scheduled checklists. Supervisors can decide what is published to the frontline teams and when, ensuring priorities are met. Users can also create checklists from the schedule.

Smarter, Faster 3D Measurements (Replace Field Work)
Problem: Manual measuring. Measuring basic dimensions like pipe diameters was laborious.
What you can do now: One-click precision. Perform one-click diameter measurements for pipes, vessels, and tanks, increasing the viability of using 3D to replace field work.

Customized 3D Experience (Reduced Friction)
Problem: Manual tweaking. Hardcoded default 3D settings forced users to adjust them manually every session.
What you can do now: Customizable settings per scene. Define default model visibility, rendering quality, and point cloud settings, reducing friction for regular users and improving the first-time experience.

Full 3D Support for Hybrid Projects
Problem: Migration bottleneck. 3D was not fully supported by the Cognite Data Model (CDM) or hybrid projects, blocking customer migrations.
What you can do now: Seamless hybrid 3D. New 3D service API endpoints for CDM enable a seamless 3D user experience in hybrid projects, enabling migration to CDM and contextualization using both asset-centric and data modeling assets.

Unlocking Scalability and Optimization

Handle billions of records and unlock full-field industrial optimization that was previously impossible, accelerating enterprise value realization.

Massive Log and Event Data Storage: Records API
Problem: Scaling limitations. Billions of high-volume log entries and events overburdened the Knowledge Graph node structure, limiting scalability.
What you can do now: Records API. Store billions of structured records with seamless integration to Data Modeling, unlocking the next level of CDF scalability.

Full-Field Production Optimization (Beta)
Problem: Blocked use cases. Current CDF capabilities couldn't handle optimizing entire offshore fields with thousands of wells simultaneously.
What you can do now: Large-scale workflow support. Run complex simulations across entire facilities, handling thousands of parameters, unlocking true field-wide production optimization.

Driving Data Quality & Model Confidence

Guarantee reliable data ingestion, enable better decision-making with high-quality observations, and validate complex workflows before deployment.

High-Quality Field Observations
Problem: Incorrect asset links. Operators struggled to attach the correct asset to field observations, reducing data quality.
What you can do now: Correctly contextualized observations. Browse the asset hierarchy to find the correct asset on both mobile and desktop, ensuring observations are captured with the correct asset and leading to better decision-making.

Reliable Workflow Development
Problem: Trial and error. Complex definitions made subtle errors difficult to find, leading to frustrating configuration.
What you can do now: Actionable validation. Perform comprehensive on-demand validation, plus auto-validation on publish. Errors are surfaced, explained, and made actionable directly in the UI.

Total Process Visibility (Workflow Triggers)
Problem: Hidden triggers. Managing live pipelines was risky without clear visibility into automated starting points.
What you can do now: Workflow triggers as nodes. Triggers are now shown as canvas nodes for full workflow visibility, delivering true end-to-end visualization and enabling safe management of live pipelines.

Transparent Document Parsing (LLM Vision) (Beta)
Problem: Verification challenge. Users couldn't easily verify LLM parsing results because extracted values weren't visually linked to the source document.
What you can do now: Visual verification. Extracted values are displayed with bounding boxes in the UI, making verification easier. You can also store data in a user-specified space.

Streamlined Annotation Review for Diagrams (Beta)
Problem: Slow and cumbersome review. Users lacked an efficient way to verify and reject annotations, especially in bulk, and faced limitations with multi-page files.
What you can do now: Dedicated review workflow. The UI is split into four distinct tabs, with the first dedicated solely to reviewing annotations. Verify or reject file and asset annotations individually or in bulk, and manage annotations across multi-page files, significantly improving focus and workflow efficiency.

Identify and Optimize Transformation Inefficiency
Problem: Wasted resources. Transformations often re-process and re-write large amounts of identical data, wasting compute resources and masking optimization opportunities.
What you can do now: Monitor "no-op" performance. The Run History UI now displays a "No-Op" metric showing how many write operations were skipped because the data was unchanged, instantly identifying inefficiency and driving smarter, incremental data loads.

Agent Reliability and Precision (Atlas AI)

Move AI agents from pilot to production with tools to monitor performance, guarantee output precision, and build trust at scale.

Data-Driven Agent Confidence
Problem: Blind deployment. Lack of visibility into agent performance or failures made it hard to build trust or troubleshoot.
What you can do now: Agent performance monitoring. Create test sets and run evaluations to view pass/fail results, enabling data-driven confidence in agent reliability and supporting smoother UAT.

Precise and Predictable Agent Queries
Problem: Unpredictable output. Agent queries could return slightly different data or formats, creating uncertainty in production workflows.
What you can do now: Guaranteed precision. Configure a specific query and lock fields, operations, and output, guaranteeing that critical data is retrieved correctly and consistently every time.

Platform Foundations

Fundamental improvements to core platform elements, including search quality and the initial user experience.

Enhanced Japanese Search
Problem: Poor Japanese search. The existing search didn't account for the unique structure of Japanese, leading to poor relevance.
What you can do now: Morphological analysis. Enhanced search uses morphological analysis to provide relevant results, balancing precision and recall.

We believe these new capabilities will significantly accelerate your industrial data journey. Dive into the detailed release notes to explore all the new features and improvements. We'd love to hear how these features are transforming your operations!

Related products: Product Releases
The Atlas AI™ Security & Trust Framework


We've just published The Atlas AI™ Security & Trust Framework, a new white paper detailing how we've engineered our low-code industrial AI agent workbench for the rigorous security demands of industry leaders.

We know security and data privacy are critical when adopting generative AI. This framework outlines how Atlas AI is secured by design, with security as an intrinsic part of its architecture, not an afterthought.

Key highlights from the framework:
- Your data stays your data: We have contractual guarantees that your prompts and responses are not stored by third-party LLM providers and are never used to train or improve their models.
- Your access is the agent's access: Atlas AI integrates with your corporate Identity Provider (IdP) for SSO. Critically, all API calls an agent makes are mapped directly to your user roles and permissions. The agent can only see and do what you can.
- Enterprise-grade security: The platform is built on a defense-in-depth architecture, employs end-to-end encryption (at rest and in transit), and undergoes continuous 24/7 monitoring and third-party penetration testing.
- Independently verified: Atlas AI inherits the comprehensive security and compliance of Cognite Data Fusion®, including our ISO 27001, ISO 9001, and SOC 2 Type 2 certifications.

Please find the full white paper here for a deeper dive. We're committed to being your trusted partner in Industrial AI and welcome your questions in the comments.

Related products: Security Updates
What's New in Cognite Academy: Q3 2025


This quarter’s release features new training on building industrial AI agents, enhancing 3D environments, improving data workflows and an all new and improved CDF fundamentals course to help you make better decisions and get more value from your data. Courses: Build In-Depth Skills, Step-by-StepCognite Data Fundamentals with Data ModelingA new and improved CDF Fundamentals course is now available. This introductory learning path is perfect for anyone new to Cognite Data Fusion. The updated content now covers the Core Data Model, hands-on exercises with Industrial Canvas, and an overview of Atlas AI. Complete the course to earn a shareable badge and certificate.3D Configuration and ManagementThis course is designed to give you the skills to set up and configure 3D environments and integrate them into your daily work. You will learn to use 3D solutions to improve visualization, enhance decision-making, and boost operational efficiency.This course is for anyone interested in learning about 3D technology and applying it in their professional field.Microlearning videos: Bite-Sized LearningsAI Agents in Industrial WorkflowsDiscover how Cognite's industrial AI agents, powered by Atlas AI, can help your teams act faster and with greater confidence. Learn how to leverage contextualized industrial data to streamline troubleshooting, automate routine tasks, and enhance operational efficiency.Building Agents in Atlas AIMaster the process of building and customizing industrial AI agents using Cognite Atlas AI. This low-code workbench enables you to automate complex tasks by connecting to contextualized data in Cognite Data Fusion. See how to create, configure, and deploy agents to analyze time series, retrieve documents, and support decision-making—all without writing a single line of code.Contextualizing Data for AIUnderstand how to transform raw industrial data into a scalable, connected foundation for advanced workflows. 
Learn to build and enrich a Knowledge Graph using tools like Data Workflows, diagram parsing, and document parsing, enabling seamless reuse of data and powering robust AI agents and applications. How-To Guide articles: Practical Tips from the ExpertsChoosing the Right Extractor for Your CDF IntegrationCognite Data Fusion offers a wide range of extractors to integrate data from industrial and IT systems into a unified platform. These are divided into two main types: Prebuilt Extractors, installed within customer infrastructure to connect with systems like OSI PI, OPC UA, databases, SAP, and document libraries; and CDF-Hosted Scalable Extractors, managed in the cloud for high-volume, real-time data ingestion from sources such as Azure Event Hub, MQTT, Kafka, and RESTful APIs.Understanding Function Memory Quotas in CDF Learn best practices to prevent memory errors in CDF functions by designing workloads with memory limits in mind. This guide explains why breaking tasks into smaller, manageable chunks is the most effective approach.How-To: Simple Streamlit App for CRUD Operations on CDF Data Model viewThis guide shows you how to create a simple Streamlit web app to manage data model instances in Cognite Data Fusion (CDF) using the Python SDK. Learn to create, read, update, and delete (CRUD) instances in real time—perfect for demos, quick data manipulation, or onboarding users to structured data modeling.Getting started with CDF performance testing This guide explains the importance of performance testing in Cognite Data Fusion (CDF) and how to assess key metrics like data ingestion rates, query performance, and API response times. It includes example notebooks to help you test efficiently, optimize costs, and ensure production readiness.Changing Your Identity Provider (IdP): Playbook and Key Information This guide explains the relationship between projects, organizations, and Identity Providers (IdPs) in Cognite Data Fusion. 
It covers the implications of changing an organization’s IdP, including session invalidation, user ID reassignment, and potential data access issues, and provides guidance on planning and requesting IdP changes safely.

Unsubscribing from Cognite Charts Alerts
This guide explains how users can unsubscribe from Cognite Charts alerts and notifications, ensuring they no longer receive emails when time series thresholds are breached.

Fixing Video Playback Failure – Error Code: 101102
This guide helps troubleshoot Error Code 101102 when a training video fails to load. It covers common causes, like network restrictions, browser extensions, VPNs, or outdated browsers, and provides step-by-step solutions to restore smooth video playback.

Optimizing Data Ingestion Performance
Learn best practices to improve data ingestion performance in Cognite Data Fusion, including batching datasets, using progress monitoring, efficiently handling asset hierarchies, leveraging delta loads with the is_new function, and scheduling transformations for faster, more reliable workflows.

Product Updates: What's New to Enhance Your Workflow

Improving Cognite Functions stability with new rate limits
We're implementing new rate limits for Cognite Function calls to improve system performance and stability. A new limit of 250 concurrent running calls per CDF project applies immediately to new projects and will go into effect on November 1, 2025, for existing projects. This change will affect a small number of projects with very high usage, ensuring a more equitable distribution of resources.

Power BI REST Connector now Generally Available
The Power BI REST API connector is now Generally Available (GA), offering better performance and broader capabilities than the legacy OData connector, which will be retired on August 18, 2026.
The new REST API connector provides broader authentication support, enhanced data access, and a significant performance boost of up to 10x, and should be used for all new reports.

New features for Cognite's GAP Connector
We've added two new capabilities to the GAP connector that address common challenges Petroleum Engineers face when working with GAP simulation models in production environments.

Information Model Extraction
The GAP connector can now automatically extract detailed flowsheet information models from simulator model revisions. This makes flowsheet data, including equipment, properties, and connections, accessible for analysis directly in CDF, eliminating the need for separate simulator access.

External Dependencies Support
This new feature allows you to manage simulation models by linking separate CDF files for components like well models or VLP tables, instead of repackaging the entire model. This simplifies updates, reduces storage overhead, and enables different team members to work on components independently.

Explore new trainings on Cognite Academy. For guides, community, and to share your ideas, visit Cognite Hub.
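The memory-quota guidance above boils down to breaking work into smaller, manageable chunks. The sketch below illustrates the idea in plain Python; the helper name and batch size are illustrative, not part of any Cognite API:

```python
def chunked(items, size):
    """Yield successive fixed-size chunks so no single batch holds the whole dataset in memory."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Hypothetical usage: process 10,000 rows in batches of 1,000 instead of one giant call.
rows = list(range(10_000))
batches = list(chunked(rows, 1_000))
print(len(batches))  # 10 batches, each small enough to stay within a memory limit
```

The same pattern applies whether the bottleneck is function memory or ingestion throughput: each batch is bounded, so peak memory stays flat regardless of total volume.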

Related products: Academy Trainings

New features for Cognite's GAP Connector

We've added two new capabilities to the GAP connector that address common challenges Petroleum Engineers face when working with GAP simulation models in production environments.

Information Model Extraction

The GAP connector can now automatically extract detailed information models from simulator model revisions, making flowsheet data accessible for analysis without requiring direct simulator access. When you upload a GAP model revision, the connector can parse the complete flowsheet hierarchy, capturing all functional blocks, such as wells, pumps, valves, pipes, and tanks, along with their operating parameters, physical properties, and configuration settings. The system also extracts connection relationships between equipment and graphical positioning information for visualization purposes.

This extraction is configurable through a modelparsing.config.yml file where you define which properties to extract from different equipment types. You can specify common properties that apply to all equipment (like oil rates, water rates, and gas rates from solver results) as well as equipment-specific properties (such as well model types, pipeline correlations, or valve characteristics). Each property definition includes the data type, units, and the GAP address suffix used to retrieve the value from the model.

The parsing process activates automatically when a new model revision is created or when you reparse an existing revision, provided a valid configuration file exists. The connector loads the GAP model, discovers all equipment instances, extracts the configured properties along with unit information, analyzes equipment connections to generate material flow relationships, and stores the complete information model in CDF alongside the simulator model revision.

The parsing adds some processing time during model validation, with the duration depending on model complexity.
If property extraction fails for individual items (due to invalid addresses, incorrect unit quantities, or wrong value types), the connector logs the errors and continues processing rather than failing the entire operation. For configuration instructions and examples, see the public documentation.

External Dependencies Support

The GAP connector now supports a new external dependencies mode that changes how you manage simulation model components. Traditional GAP workflows require packaging everything into a single .gar archive file, which can get quite large for big networks. Any change to an individual component, such as a well model or VLP table, necessitates repackaging and re-uploading the entire bundle, creating storage overhead and making frequent updates impractical.

The external dependencies mode allows you to upload .zip files containing only the GAP network files (production and injection networks) while mapping individual node dependencies, such as .OUT and .VLP files, as separate CDF files. When creating a model revision, you provide a mapping structure that links each external dependency file to its proper location in the GAP model using the simulator's address notation. For example, you can map a PROSPER .OUT file to GAP.MOD[{PROD}].WELL[{A1}].File and a .VLP file to GAP.MOD[{PROD}].WELL[{A1}].VLPFile.

This approach lets you update individual well models, VLP tables, or other components without rebuilding entire model bundles. Different team members can work on different components independently, and you avoid duplicating unchanged files across model revisions, significantly reducing storage costs.
The connector handles the complexity by downloading the network files, processing the external dependencies mapping, downloading each referenced CDF file separately, resolving file paths using the provided arguments, and assembling everything into a complete simulation model.

For complete setup instructions covering both the new external dependencies mode and the single bundle mode, refer to the public documentation. These features are currently in beta and subject to change based on user feedback. Access the parsed information through the existing simulator model revision endpoints, and configure external dependencies through the standard model revision creation process.
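To make the mapping structure concrete, here is a minimal Python sketch. The dictionary shape, field names, and helper function are illustrative assumptions, not the connector's actual API; only the GAP address notation (e.g. GAP.MOD[{PROD}].WELL[{A1}].File) comes from the examples above:

```python
# Hypothetical mapping from external dependency files (stored as separate CDF files)
# to their locations in the GAP model, using the simulator's address notation.
external_dependencies = [
    {"file_external_id": "well_A1_prosper_out",  # CDF file holding the PROSPER .OUT model
     "address": "GAP.MOD[{PROD}].WELL[{A1}].File"},
    {"file_external_id": "well_A1_vlp_table",    # CDF file holding the .VLP lift curves
     "address": "GAP.MOD[{PROD}].WELL[{A1}].VLPFile"},
]

def files_to_download(mapping):
    """Collect the CDF file IDs the connector would fetch before assembling the model."""
    return [entry["file_external_id"] for entry in mapping]

print(files_to_download(external_dependencies))
```

The point of the mapping is that only changed entries need new CDF files; unchanged well models are simply referenced again by the next model revision.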

Related products: Simulator Integration

Cognite Data Fusion - Q3 2025 Product Release

Dear community,

We are excited to share highlights from the Q3 2025 release of Cognite Data Fusion. This post will walk you through:
- Atlas AI becoming smoother, smarter, and more governed
- New tools for operators in the field to plan and act on checklists seamlessly
- Richer context in Charts, 3D, and simulator integrations to connect data with real-world operations

And there's much more to explore. Dive into the latest release notes and uncover all the features and improvements designed to accelerate your industrial data journey. These brand new additions will be available starting September 2nd. We'd love to hear your feedback here on Cognite Hub, and we thank all our community members for your contributions so far. Keep the ideas coming: your input helps us shape the future of Cognite Data Fusion.

In this release:

Atlas AI
- Smoother Agent Interactions
- Smarter Knowledge Graph Queries
- Secure & Governed Agent Operations
- Accelerating Time-to-Value with Templates
- Document Parser Enhancements

Industrial Tools
- Checklist Scheduling
- Ad-Hoc Checklists from Mobile Templates
- Activities Visualization in CDM Projects
- Smarter, Faster Ways to Explore Data
- Support for CDM and Hybrid Projects
- Automatic Tag Detection for 360° Images
- New Measurement Tools
- Power BI App 3D Viewer
- Extract Information Model from Simulators
- External Dependencies in Model Revisions

Data Management
- Configurable Tag Detection UI
- Annotation Box Adjustments
- Core Data Model Support for DB Extractor

Admin
- App-Scoped Access

Atlas AI

Smoother Agent Interactions
We've evolved the Atlas AI experience to feel like a natural extension of CDF. Cross-agent chat now enables users to carry conversations fluidly across agents, reducing friction and keeping work moving. A refreshed UI aligns visually with CDF for smoother, more consistent interactions.
Together, these updates strengthen trust and make agents an integral part of daily workflows, driving higher engagement. Documentation

Smarter Knowledge Graph Queries
The new Agentic QKG v2 unlocks richer, engineer-like insights. By supporting multi-step queries, diagram annotations, and edge properties, agents can now handle nuanced industrial questions that previously ended in dead-ends. This makes it easier to retrieve deep context and uncover relationships across complex datasets. Documentation

Secure & Governed Agent Operations
Enterprise customers now have stronger governance. New access controls (agents:read, agents:write, agents:run) define who can view, create, or execute agents. High-impact tools like Python execution and REST API calls require explicit confirmation before running. This prevents accidental execution and strengthens compliance and security. Documentation

Accelerating Time-to-Value with Templates (Beta)
We're launching ready-to-use agent templates like Data Scout (for instant industrial data retrieval in Canvas) and Time Series Trender (for trend and root-cause analysis). These reduce time-to-insight and empower engineers to self-serve without waiting on data science teams. Documentation

Document Parser Enhancements (Beta)
Parsing is smarter and more flexible. Users can leverage LLM vision capabilities, set custom parsing instructions in the UI or API, select page ranges, and extract multiple instances. This unlocks more data trapped in files while giving customers more control and performance. Documentation

Industrial Tools

Checklist Scheduling (Beta)
A new Schedules tab allows supervisors to plan upcoming checklists by shift, day, or week. This ensures the most critical work gets prioritized, giving supervisors flexibility to optimize operations and resource allocation. Documentation

Ad-Hoc Checklists from Mobile Templates
Not all fieldwork follows a schedule.
With mobile checklist templates, operators can now generate unique, ad-hoc checklists directly in the field. Filtering options make it easy to find the right template, and mobile support means no dependency on desktop systems. This empowers operators to respond to changing conditions on the spot.

Activities Visualization in CDM Projects (Beta)
Charts now allows engineers to overlay activities (like maintenance or process events) directly on top of time series. By clicking an activity, users can view full details instantly. This bridges the gap between "what happened" (data anomalies) and "why it happened" (operational context), speeding up troubleshooting and decision-making.

Smarter, Faster Ways to Explore Data
Search and browse just got a major upgrade. Users can:
- View denser tables with frozen headers.
- Resize preview panels for quick file inspection.
- Apply alphanumeric sorting, Enum filters, and "select all" options.
- See clearer tooltips and error handling.

The result is faster, more comfortable navigation, letting users spend less time finding data and more time analyzing it.

Documentation:
- Introduced Space and External Id across filters, columns, and properties
- Filters/columns: "Select all" and filtering on enum properties
- Improvements to the search configuration UI

Support for CDM and Hybrid Projects
We've closed a key migration gap: hybrid CAD projects and CDM point cloud contextualization are now fully supported via API. This unblocks major customer migrations and accelerates the transition from classic to CDM. Documentation

Automatic Tag Detection for 360° Images
Users can now run text detection on 360° images to automatically generate annotation suggestions with bounding boxes.
This reduces manual contextualization time and encourages more frequent updates to this rich data format. Documentation

New Measurement Tools
3D becomes more practical with enhanced measurement tools: horizontal cylinder measurement (ideal for exchangers and flanges), coordinate-based measurement, and improved UX for precision. These improvements reduce the need for on-site field work. Documentation

Power BI App 3D Viewer (Beta)
Requested by customers like Aker, this Power BI app allows direct viewing, rotating, and measuring of CAD models inside dashboards. Users can now correlate Power BI data with 3D spatial context, providing a richer lens for analysis. Documentation

Extract Information Model from Simulators (Beta)
A long-standing challenge has been the opacity of simulator files. This feature automatically extracts flowsheet hierarchies (equipment, properties, connections) into CDF. With initial support for GAP, engineers and AI systems can finally see and use this data without opening proprietary software.

External Dependencies in Model Revisions (Beta)
Instead of uploading massive monolithic archives, users can now upload smaller ZIPs and manage dependencies as separate CDF files. This saves storage, avoids duplication, and makes updates much faster, particularly critical for large production networks.

Documentation:
- Integration Documentation
- Setting up a GAP Connector

Data Management

Configurable Tag Detection UI (Beta)
The diagram parsing UI now offers a configuration interface. Users can fine-tune parameters (like minTokens and searchField) and scope which assets or files are searched. This delivers more accurate and relevant results without custom scripts.

Annotation Box Adjustments (Beta)
When annotation boxes shift after file version changes, users can now move and scale them directly in the UI, even in bulk. This saves manual rework and keeps annotations aligned with minimal effort.
Core Data Model Support for DB Extractor
The DB Extractor now writes directly into CDM instance spaces, whether for Time Series, File concepts, or any other data model nodes and edges. This simplifies onboarding of database sources, shortens time-to-value, and makes CDM a first-class destination for structured and unstructured data. Documentation

Admin

App-Scoped Access
Admins can now scope access capabilities to specific applications. By limiting which groups apply to each app, queries in large Data Models are faster and more efficient. This change strengthens performance without reducing flexibility.
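To illustrate the governance model described under Secure & Governed Agent Operations, the snippet below sketches how the three agent actions might gate operations. The capability names (agents:read, agents:write, agents:run) come from the release notes; the checking logic itself is a hypothetical sketch, not Cognite's implementation:

```python
# The three agent capability actions named in the release notes.
AGENT_ACTIONS = {"agents:read", "agents:write", "agents:run"}

def can(user_capabilities, action):
    """Return True if the user's granted capabilities include the requested agent action."""
    if action not in AGENT_ACTIONS:
        raise ValueError(f"unknown agent action: {action}")
    return action in user_capabilities

viewer = {"agents:read"}
print(can(viewer, "agents:read"))  # a viewer can inspect agents
print(can(viewer, "agents:run"))   # but execution requires an explicit agents:run grant
```

Separating read, write, and run grants is what lets admins allow broad visibility while restricting who can actually execute high-impact tools like Python execution or REST API calls.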

Related products: Product Releases, Atlas AI

Power BI REST Connector now Generally Available

Hi everyone,

We're excited to announce that the Power BI REST API connector is now Generally Available (GA), offering significant performance improvements and broader capabilities. At the same time, we're deprecating the legacy OData connector, which will be retired on August 18, 2026.

What's changing:

REST API connector promoted to GA:
- Broader authentication support: Works with all identity providers supported by CDF (not just Azure Entra ID)
- Enhanced data access: OData services, GraphQL for data models, and direct REST API endpoints
- Up to 10x performance improvement when using direct REST endpoints
- Full API coverage: Access data from any GA CDF API endpoint

OData connector deprecated:
- Deprecation date: August 18, 2025
- Retirement date: August 18, 2026
- Existing reports will continue to work until retirement
- All new reports must use the REST API connector

What this means for you:

For new reports:
- Use the REST API connector exclusively
- Choose from three data access methods based on your needs:
  - OData services for quick exploration (no coding required)
  - GraphQL for data models (minimal coding)
  - Direct REST for maximum performance (needs some Power Query coding experience)

For existing OData-based reports:
- Continue working normally until the retirement date
- Plan migration to the REST API connector before retirement
- We've created a migration guide with multiple migration options, ranging from minimal code changes to performance-optimized approaches

Resources:
- REST API connector setup guide: https://docs.cognite.com/cdf/dashboards/guides/powerbi/set_up_rest_connector
- Migration guide from OData: https://docs.cognite.com/cdf/dashboards/guides/powerbi/set_up_odata_connector#migration-guide
- Deprecation timeline: https://docs.cognite.com/cdf/deprecated
- Training (coming soon): While our documentation is ready for you to get started immediately, we're also developing dedicated Academy training courses covering the new REST connector features and migration best practices. These hands-on courses will be available soon to help you make the most of the enhanced capabilities.

The REST API connector represents a significant advancement in how Power BI integrates with CDF, providing better performance, broader compatibility, and access to all CDF features. We encourage starting your migration planning early to ensure a smooth transition.

For questions or support during migration, please reach out to your Cognite representative or post in the forum.

Improving Cognite Functions stability with new rate limits

Hi there,

We're introducing new rate limits for Cognite Function calls, affecting less than 5% of CDF projects based on current usage patterns.

Cognite Functions has seen significant growth, and we've identified usage patterns with very high running call counts that are impacting system capacity and performance for all users. Our new rate limits will ensure more equitable resource distribution and maintain reliable service quality across all CDF projects.

What's changing:

We're introducing limits on concurrent function calls with a phased rollout:
- Current: Undocumented concurrent running call limits (based on the underlying cloud provider)
- New: Maximum of 250 concurrent running calls per CDF project (independent of cloud provider)
- New projects: Limits apply immediately, starting with this announcement
- Existing projects: Limits take effect November 1, 2025

Additionally, we're updating the HTTP response status code for the case where the limit of 100 concurrent running function calls per function is exceeded. The code will change from 400 to 429, aligning with HTTP standards.

What this means for you:
- Most CDF projects won't notice any difference. The 250 concurrent running call limit supports typical usage patterns across most workloads.
- CDF projects currently exceeding this limit will receive direct outreach from our team with optimization guidance and transition support.
- We recommend reviewing your function concurrency patterns and considering optimizations like batching or queue management for high-volume scenarios.
- Upon reaching the limit, new function call requests will return HTTP 429 (Too Many Requests), signaling clients to back off and retry later. Some SDKs, like the official Python SDK, already handle automatic retries with exponential backoff for 429 responses, minimizing the need for code changes.

We take these decisions seriously.
We're committed to supporting high-concurrency use cases through architectural improvements and will continue to monitor usage patterns to ensure these limits serve our community effectively.We also recognize the importance of maintaining performance during this transition. Our team will be closely monitoring system behavior and is ready to provide direct support for projects that need assistance with optimization strategies.
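The recommended client behavior when you hit the limit (back off and retry on HTTP 429, as the official Python SDK already does automatically) can be sketched as follows. This is a generic illustration using a stubbed call, not code from the SDK:

```python
import random
import time

def call_with_backoff(make_call, max_retries=5, base_delay=0.5):
    """Retry a call with exponential backoff (plus jitter) whenever it signals HTTP 429."""
    for attempt in range(max_retries + 1):
        status, result = make_call()
        if status != 429:
            return result
        if attempt == max_retries:
            raise RuntimeError("still rate limited after retries")
        # Exponential backoff: base, 2x base, 4x base, ... with a little random jitter.
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Stub that is rate limited twice, then succeeds (stands in for a Functions call).
responses = iter([(429, None), (429, None), (200, "call started")])
print(call_with_backoff(lambda: next(responses), base_delay=0.01))  # call started
```

If you manage your own HTTP calls, this pattern (plus respecting any Retry-After header the service returns) is usually enough; SDK users typically need no code changes.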

Related products: Functions

Microlearning course on Contextualizing Data For AI is Live Now!

From Raw Data to Reliable Insight: Unlock the Power of Contextualization in Cognite Data Fusion

Industrial data is often scattered and siloed, making it hard to use, reuse, and scale across operations. That's where Cognite Data Fusion (CDF) comes in. This course shows how CDF transforms raw data into a connected, contextualized foundation that fuels AI agents, applications, and decision-making across your organization.

Check out our new microlearning on Contextualizing Data For AI.

Discover What's Behind the Data That Drives AI

With Data Workflows, diagram parsing, and the Document Parser, your teams can:
- Automate the ingestion and transformation of raw industrial data
- Extract meaningful insights from engineering diagrams and technical documents
- Create connections between assets, files, and data models
- Build a Contextualized Knowledge Graph that powers AI-driven decisions

To learn how to use data workflows in Cognite Data Fusion to automate and orchestrate complex data processes: https://learn.cognite.com/cognite-data-workflows

Explore How It All Comes Together

The course walks through key features like:
- Data Workflows to create reliable, scalable data pipelines
- Diagram Parsing to extract relationships from P&IDs and link them to assets
- Document Parser to capture structured information and enrich data models

Together, these services populate your Core Data Model and form a unified view of operations. Once contextualized, data becomes reusable across AI agents, apps, and third-party tools: no duplication, no silos.

Build Once. Use Everywhere.

With your contextualized data foundation in place:
- Teams stop spending time redoing the same tasks
- Data products become shareable and reusable
- AI agents gain the clarity and context they need to deliver trusted results

To learn more about contextualization in Cognite Data Fusion: https://learn.cognite.com/working-with-cdf-contextualize

Let's build a smarter and more connected future, one dataset at a time.

Related products: Academy Trainings

New Academy Offering: Microlearning course on Building Agents in Atlas AI

Build Smart Industrial Agents with Confidence

In complex industrial environments, timely decisions depend on fast and accurate access to data and analysis. With Cognite Atlas AI, your teams can do more than just find data. They can build AI agents that understand context, perform analysis, and take action. This course walks you through how Atlas AI's low-code workbench makes it easy to create and configure intelligent agents grounded in real operational data.

Check out our new microlearning on Building Agents in Atlas AI.

From Questions to Actions with No Coding Required

With the Atlas AI Workbench, you can build agents that:
- Retrieve data intelligently: Agents pull design specifications, time series, and documents using smart, contextual logic.
- Perform advanced analysis: Let your agents write and execute code to perform tasks like deviation analysis, with no scripting required.
- Take action automatically: Once configured, agents can recommend maintenance, place material orders, and update schedules, helping teams avoid delays.

Customize Agents to Your Needs

In the course, you'll learn how to use tools like:
- Find Files to locate relevant engineering documents
- Ask Document to extract answers using semantic search
- Time Series Analysis to automate real-time equipment monitoring

Everything comes together in one platform, built on trusted contextualized data from Cognite Data Fusion, so your AI agents can work smarter and deliver reliable insights.

Scale AI Across Teams and Workflows

Once built, these agents are instantly available across Cognite applications like Canvas and Charts, and even external tools via the Atlas API. That means:
- Faster decision-making
- More consistent troubleshooting
- Less time spent on manual data search and preparation

Let's empower teams with agents that understand the job and get it done.

Related products: Academy Trainings

New Microlearning course on AI Agents in Industrial Workflows

Unlocking Actionable Intelligence with AI Agents

At Cognite, we recognize that industrial workers often spend over half their time searching for the right data, time that should be spent on maintenance, operations, and decision making. While AI copilots and chatbots offer potential, their true value is realized only when they drive real action. That is where Cognite's industrial AI agents, powered by Atlas AI, step in, streamlining workflows and enabling teams to move from insight to action in seconds.

Check out our new microlearning on AI Agents in Industrial Workflows.

Bringing AI into Daily Operations

With Cognite's Industrial Canvas, operators, engineers, and field teams collaborate with AI agents to execute specific tasks faster and with greater confidence. These agents help teams:
- Find data instantly: Search for operational, IT, and engineering data with simple, natural language queries.
- Troubleshoot efficiently: Use AI to analyze issues in context, such as oil discharge, and recommend targeted actions.
- Automate workflows: Create work orders, detect missing resources, and reschedule tasks without manual intervention.

To learn more about Industrial Canvas and how to use this tool effectively: https://learn.cognite.com/path/domain-expert-basics/industrial-canvas

Streamlining Decision-Making with Contextualized AI

These AI agents are built to do more than just make suggestions. They take action. For example, a troubleshooting agent can identify abnormal oil viscosity, recommend a fix, trigger a material order, and adjust the maintenance timeline, all in one seamless process.

With access to contextualized data from Cognite Data Fusion, agents can:
- Support consistent decision-making
- Reduce manual back-and-forth
- Deliver trusted insights, customized to each task

And because they operate across Canvas, other CDF applications, and third-party systems, agents are always ready to assist at any time and from anywhere.
From Task Execution to Scalable Automation

These ready-made agents are just the beginning. With Cognite Atlas AI, teams can build and customize agents for their specific needs, automate repetitive tasks, enhance human expertise, and scale operations across different sites.

Let us rethink the way we work, with intelligent agents by our side.

Related products: Academy Trainings
Quarterly Training Updates on Cognite Academy - Q2 2025


We are excited to bring you the latest updates from Cognite Academy. This quarter's release is packed with brand-new content, all designed to make your learning journey smoother and more impactful. Get ready to explore exciting new resources, from cutting-edge AI agent building to streamlined data management and powerful data visualization.

Unlock the Power of AI with Atlas AI Microlearnings

This microlearning series introduces how to create, configure, and deploy AI agents in Atlas AI, Cognite's low-code agent builder for CDF. These agents can assist users in retrieving industrial data, performing troubleshooting tasks, and supporting operations, all using natural language.

Build and configure Atlas AI agent
Master the art of building custom AI agents using language models, goals, instructions, and tools.

Getting started with the Atlas AI agents
Enhance your agent's capabilities with powerful tools like Find Assets, Find Maintenance Orders, Find Files, Answer Document Questions, and Find Time Series.

Integrate your Atlas AI agents with Industrial Canvas and Charts
Discover how to leverage published agents within Industrial Canvas and Charts for collaborative asset investigations.

Other Microlearnings

Beyond Atlas AI, we've added more bite-sized learning opportunities to sharpen your skills.

Managing Resources with Spaces
This course provides a comprehensive introduction to resource management within the Cognite Data Fusion application, focusing on the concept and practical use of Spaces. You will learn how to create, manage, and utilize Spaces effectively, ensuring secure and organized data governance.
Transform Data into Core Data Model
Learn how to write data into the Core Data Model using the Transformation tool with this quick video.

Create Timeseries in Core Data Model
Explore how to create time series in the Core Data Model using the Cognite Python SDK.

Charts: Configuring Alerts and Notifications
Master setting thresholds, configuring alerts, adding calculations, and creating synthetic time series in Charts.

Courses: Deep Dive with Our Latest Courses

For more in-depth learning, check out these new and updated courses.

Access Management in CDF: Best Practice
This course provides a practical introduction to managing access in Cognite Data Fusion (CDF), with a focus on how to design secure, scalable, and role-based access structures that support real-world use cases.

Industrial Canvas update
Experience the new and improved user experience in Industrial Canvas, our collaborative digital whiteboarding tool for exploring, visualizing, and analyzing industrial data.

Product Updates: Some of the Exciting Updates You Need to Know

Simulator integration: More flexible license handling for simulator connectors
Simulator connectors now support more flexible license handling, reducing disruptions during high-frequency or large-scale simulation runs. This update lets you better optimize license usage based on your operational needs and simulation patterns. Learn how it can improve your workflows.

Data Workflows: Data Workflows UI is now Generally Available
Building and managing industrial data pipelines just got more intuitive! The new Data Workflows UI in Cognite Data Fusion is now generally available, offering a visual orchestration tool that simplifies pipeline creation, monitoring, and management, whether you're a technical expert or a business user. Explore how you can streamline your data workflows.
Community How-To Guides: Practical Tips from the Experts

Our community-driven how-to guides are packed with practical advice to help you succeed.

Identity and Access Management
Learn how to manage Identity Providers and access in Cognite Data Fusion. This guide provides a structured overview of setting up your Identity Provider (IdP) and managing access for users, groups, applications, and service accounts within CDF.

How to Debug and Monitor Cognite Transformations
Learn how to effectively debug and monitor your data transformations in Cognite Data Fusion (CDF) to ensure their reliability and accuracy.

How to test Functions before deploying
Learn different ways to test Cognite Functions with this how-to guide.

How to Manage Projects, Transformations, and Datasets in the Cognite Data Fusion UI
Learn how to manage projects, transformations, and datasets using the web-based user interface in Cognite Data Fusion (CDF).

Browse the entire Cognite Academy catalog for these new resources and many more. Happy learning!

Related products: Academy Trainings

More flexible license handling for simulator connectors

Simulator license constraints can cause significant disruption when running engineering workflows, especially during high-frequency simulations or when available licenses are limited. Previously, simulator connectors only supported acquiring and releasing licenses for every simulation task. We've now added more flexible options that allow you to optimize license usage for different usage patterns and operational needs.

License handling updates

The new license management modes provide configurable strategies to adapt to different usage patterns:
- Per-task acquisition (default): Licenses are acquired and released for each simulation task.
- Inactivity-based release: Connectors hold licenses across multiple consecutive tasks, releasing them only after a configurable period of inactivity. This reduces overhead for batch operations while ensuring efficient license sharing during idle periods.

Beyond the new license handling options, we've added reliability improvements that work across both strategies:
- Process cleanup: Automatic detection and termination of stuck simulator processes that fail to release licenses properly.
- Retry logic: A retry pattern with exponential backoff when licenses are temporarily unavailable.

Current availability

Flexible license handling is implemented in the PROSPER connector and will be extended to other simulator connectors in the future. Default behavior remains unchanged for backward compatibility, which means existing configurations continue working without modification.

Configure the new license handling options in your connector configuration when you're ready to optimize your license utilization strategy. Read more about the available options in our public documentation.
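The inactivity-based mode described above can be illustrated with a small sketch: hold the license across consecutive tasks, and release it only after a quiet period. The class below is a hypothetical model of that behavior, not the connector's actual code:

```python
class InactivityLicense:
    """Holds a simulator license across tasks; releases it after `timeout` seconds of inactivity."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.held = False
        self.last_used = None

    def run_task(self, now):
        if not self.held:      # acquire only if we don't already hold a license
            self.held = True
        self.last_used = now   # every task resets the inactivity clock

    def tick(self, now):
        """Periodic check: release the license once the inactivity window has elapsed."""
        if self.held and now - self.last_used >= self.timeout:
            self.held = False

lic = InactivityLicense(timeout=60)
lic.run_task(now=0)    # first task acquires the license
lic.run_task(now=30)   # second task reuses it, no re-acquisition overhead
lic.tick(now=50)       # only 20s idle: still held
print(lic.held)        # True
lic.tick(now=95)       # 65s idle: released back to the pool
print(lic.held)        # False
```

Compared with per-task acquisition, batches of back-to-back simulations pay the acquire/release cost once, while idle connectors still free licenses for other users after the timeout.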

Related products: Simulator Integration

Data Workflows UI is now Generally Available

Industrial operations generate massive volumes of data that must be processed, transformed, and contextualized to drive business value. Yet building reliable data pipelines through APIs alone, or managing individual tasks in isolation, can create operational complexity, especially as your data ecosystem grows.

Data Workflows in Cognite Data Fusion addresses this challenge with a visual orchestration platform that's now generally available. Improve how you build, manage, and monitor your industrial data pipelines with an intuitive graphical interface designed for both technical and business users.

Why Data Workflows changes the game

Data Workflows is an orchestration service that lets you connect different CDF tasks together and run them in sequence with dependency management. While we're just getting started with this service, it already provides several key improvements over managing individual tasks:

- Task coordination: Handle complex sequences of tasks with dependencies, supporting up to 50 concurrent executions and 200 tasks per workflow.
- Enhanced reliability: Built-in retry logic, error handling, and failure recovery help reduce manual intervention when individual components encounter issues.
- Execution visibility: Track execution history, monitor task status, and debug failures with centralized logging and status monitoring.
- Mixed task types: Combine different task types and create dynamic, data-driven pipelines that adapt to different data conditions and requirements.

Current task orchestration capabilities

Data Workflows currently supports six task types that cover common industrial data operations, with plans to expand this list as we continue developing the service:

- CDF Transformations: Run SQL transformations to process and transform data, with built-in retry policies, advanced concurrency control, and credential management.
- Cognite Functions: Custom data processing logic, integration with external APIs and systems, complex calculations and analytics, and data validation and quality checks, with support for both synchronous and asynchronous execution patterns.
- Simulations: Engineering simulations for process optimization, predictive maintenance simulations to assess equipment health, and what-if scenario analysis integrated directly into your data pipelines.
- CDF API Requests: Direct integration with any CDF API endpoint for lightweight resource management and data retrieval.
- Dynamic Tasks: Process variable numbers of data sources, create workflows based on runtime conditions, and handle different processing paths based on data content; great for dynamic industrial scenarios.
- Subworkflows: Group related tasks for better organization, create reusable task blocks, and logically separate workflow phases to enhance maintainability and reusability.

Workflow automation with Triggers

- Scheduled triggers: Run a data workflow at regular intervals using cron expressions, for everything from hourly data updates to monthly reporting cycles.
- Data modeling triggers: Run a data workflow based on changes to data modeling instances matching a filter, with batching controls that distribute load over multiple workflow executions.

Advanced workflow capabilities

- Parametric workflows: Pass data between tasks and from the outside world into your workflows, making them reusable across different scenarios.
- Error handling: Mark tasks as required or optional. Optional tasks can fail without cancelling the whole workflow.

Where to find it

Navigate to Data Management > Integrate > Data Workflows to begin building your first automated pipeline, or explore our public documentation to discover current capabilities.
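The core orchestration concepts above (dependency ordering, retries, required vs. optional tasks) can be sketched generically in a few lines of Python. This is a toy illustration of the ideas, not the Data Workflows service or its API; `run_workflow` and its inputs are invented for the example:

```python
def run_workflow(tasks, dependencies, max_retries=2):
    """Run tasks in dependency order with retries and optional-task handling.

    tasks: {name: (callable, required)}; dependencies: {name: [upstream names]}.
    Toy sketch of the orchestration concepts, not the actual service.
    """
    done, failed, order = {}, set(), []
    remaining = dict(tasks)
    while remaining:
        progressed = False
        for name in list(remaining):
            deps = dependencies.get(name, [])
            if any(d in failed for d in deps):
                # An upstream task failed: skip this one; a required task
                # being skipped fails the whole workflow.
                _, required = remaining.pop(name)
                failed.add(name)
                progressed = True
                if required:
                    raise RuntimeError(f"required task {name!r} skipped: upstream failure")
                continue
            if not all(d in done for d in deps):
                continue  # upstream results not ready yet
            func, required = remaining.pop(name)
            progressed = True
            for attempt in range(max_retries + 1):
                try:
                    done[name] = func()  # output available to downstream tasks
                    order.append(name)
                    break
                except Exception:
                    if attempt == max_retries:
                        failed.add(name)
                        if required:
                            raise
        if remaining and not progressed:
            raise ValueError("cyclic or unsatisfiable dependencies")
    return order, done
```

The key design point mirrors the error-handling capability above: a failing optional task is recorded as failed and its downstream tasks are skipped, but the workflow itself completes; a failing required task aborts the run after its retries are exhausted.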

Related products: Data Workflows

Cognite Data Fusion - Q2 2025 Product Release

Dear community,

We are excited to share highlights from the June 2025 release of Cognite Data Fusion. This post will walk you through:

- Orchestrate your data pipelines with Data Workflows in General Availability
- Interact with multiple AI agents across your business workflows
- Accelerate contextualization of 360° images

There’s also much more to explore and discover. Dive into the latest release notes and uncover all the additional features and improvements designed to take your industrial data journey to new heights. These brand-new additions will be available to you on June 3rd, with some exceptions for Atlas AI, which will come by the end of June.

We would love to hear your feedback here on Cognite Hub over the coming weeks. We also want to thank all our community members for your contributions so far. We're eager to continue this collaborative process with you, seeking your valuable input on both existing and upcoming product features. We encourage you to continue to submit your Product Ideas here in the community, to help us understand how we can continue to evolve Cognite Data Fusion to fit your needs. Let's keep this momentum going!

Orchestrate your data pipelines with Data Workflows in General Availability

Onboarding high-quality, real-time industrial data to Cognite Data Fusion is essential for powering industrial applications and user experiences, ultimately enabling value capture. Building data pipelines through APIs alone or setting up individual tasks is error-prone at larger scales, and can also create friction for less technical users. Data Workflows offers an intuitive, graphical interface for building and managing workflows, in addition to tools to efficiently monitor and inspect workflow executions. Key capabilities include:

- Define tasks for Transformations, Functions, Simulations, other Cognite Data Fusion requests, and more to be executed.
- Create triggers for the workflow to run based on a schedule or events (e.g. new data in your data model).
- Manage workflow versions to handle changes and iterations.
- Monitor the history of runs to understand status and execution details, allowing for efficient debugging.

You can read more about Data Workflows in docs.cognite.com.

Interact with multiple AI agents across your business workflows (available end of June)

Atlas AI agents can help enable and accelerate your daily work, ranging from easier retrieval of industrial data to more advanced use cases such as equipment troubleshooting and root cause analysis. We are now launching an improved experience for interacting with Atlas AI agents, including:

- The option to switch between available AI agents in a conversation, without losing context.
- Auto-selection of the most recently used agent when you re-open Atlas AI.
- Persistence of the conversation across user interfaces and agents, until a new session is started.
- Native support for interacting with Annotations, improving the experience of working with contextualized P&IDs.

You can read more about Atlas AI in docs.cognite.com.

Accelerate contextualization of 360° images

3D models in Cognite Data Fusion can provide additional high-value information through 360° images of the industrial facility. Until now, users had to manually inspect uploaded 360° images to find equipment tags appearing as text, and contextualize each one to the right assets and equipment. This release includes automatic detection of text on equipment in 360° images, with a list of suggested image areas likely to contain tags for contextualization. This helps reduce manual effort and speed up contextualization of equipment tags in large image collections. You can read more about contextualization of 360° images in docs.cognite.com.
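Scheduled triggers are driven by cron expressions. As a reminder of how a five-field cron expression maps onto a timestamp, here is a minimal, self-contained matcher; it is an illustrative sketch supporting only `*`, plain numbers, comma lists, and `*/n` steps, not the parser the service uses:

```python
from datetime import datetime

def cron_matches(expr, dt):
    """Check whether a 5-field cron expression (min hour dom mon dow) fires at dt.

    Minimal sketch: supports '*', plain numbers, comma lists, and '*/n' steps.
    """
    fields = expr.split()
    # Cron convention: day-of-week 0 = Sunday, so map ISO weekday (Mon=1..Sun=7).
    values = [dt.minute, dt.hour, dt.day, dt.month, dt.isoweekday() % 7]
    for field, value in zip(fields, values):
        if not any(_part_matches(part, value) for part in field.split(",")):
            return False
    return True

def _part_matches(part, value):
    if part == "*":
        return True
    if part.startswith("*/"):
        return value % int(part[2:]) == 0
    return int(part) == value
```

For example, `"0 * * * *"` fires at the top of every hour, while `"*/15 9 * * *"` fires every 15 minutes during the 09:00 hour.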

Related products: Product Releases

Cognite Data Fusion Bootcamp Full Calendar Is Now Live

Our full Cognite Data Fusion Bootcamp calendar is now available through December 2025. This 4-day, in-person training is your opportunity to build end-to-end solutions using Cognite Data Fusion (CDF), covering data foundation, integration, modeling, and orchestration through hands-on, real-world scenarios. Ready to skill up? Reserve your spot today for our bootcamps scheduled in Oslo and Houston.

- Cognite Data Fusion Bootcamp in Oslo - June 2025: June 16-19, 2025, Oslo, Norway
- Cognite Data Fusion Bootcamp in Houston - June 2025: June 23-26, 2025, Houston, TX, US
- Cognite Data Fusion Bootcamp in Houston - July 2025: July 14-17, 2025, Houston, TX, US
- Cognite Data Fusion Bootcamp in Oslo - Aug 2025: August 11-14, 2025, Oslo, Norway
- Cognite Data Fusion Bootcamp in Houston - Aug 2025: August 18-21, 2025, Houston, TX, US
- Cognite Data Fusion Bootcamp in Oslo - Sep 2025: September 8-11, 2025, Oslo, Norway
- Cognite Data Fusion Bootcamp in Houston - Sep 2025: September 22-25, 2025, Houston, TX, US
- Cognite Data Fusion Bootcamp in Oslo - Oct 2025: October 13-16, 2025, Oslo, Norway
- Cognite Data Fusion Bootcamp in Houston - Oct 2025: October 20-23, 2025, Houston, TX, US
- Cognite Data Fusion Bootcamp in Oslo - Nov 2025: November 10-13, 2025, Oslo, Norway
- Cognite Data Fusion Bootcamp in Houston - Nov 2025: November 17-20, 2025, Houston, TX, US
- Cognite Data Fusion Bootcamp in Oslo - Dec 2025: December 8-11, 2025, Oslo, Norway
- Cognite Data Fusion Bootcamp in Houston - Dec 2025: December 15-18, 2025, Houston, TX, US

Confirmed for the Bootcamp? Join the Bootcamp community!

Congratulations on securing your spot in the Cognite Data Fusion Bootcamp! To stay informed and connected, be sure to join the official Cognite Data Fusion Bootcamp Group. Here, you’ll find everything you need to prepare: key updates, important resources, and a space to ask questions, before and during the bootcamp.

Related products: Academy Trainings