
Exciting Enhancements for 3D Workflows!

We're thrilled to announce a set of updates designed to tackle the challenges of working in complex 3D environments and 360° images. Our latest features make navigation and contextualization easier than ever, and we're introducing seamless collaboration in 3D!

Introducing 3D Points of Interest

Navigating complex 3D environments just got a whole lot easier! With our new 3D Points of Interest feature, you can:

Mark Specific Locations: Capture coordinates, camera views, and selected assets for quick reference.
Collaboration Made Simple: Share these points with your team to enhance context, streamline workflows, and boost productivity.

Documentation

Enhanced Navigation with 3D Ghosting and Image Navigation

Remove visual clutter and navigate with ease using these powerful features:

Ghosting for 3D Content: Declutter your view to focus on what matters most.
Streamlined 360° Image Navigation: Seamlessly move between previously viewed images and adjust opacity for in-depth analysis.
Toggleable Markers: Show or hide markers to reduce visual distractions.

Documentation

Unified 3D Contextualization

We're making it easier to link assets to their 3D representation from the 3D scene configuration in data management:

Maintain Spatial Context: Initiate contextualization directly from the 3D scene without switching views, ensuring accuracy and reducing confusion.

Documentation

These updates are live and ready to power up your productivity! Dive in, explore, and let us know how these features transform your workflows.

💡 Need extra help? If you still find 3D hard to work with, we've got you covered! Check out our newly released course on how to use Search and 3D in our Academy. You can enroll here.

What's your favorite new feature? Drop a comment and tell us how you're using it!

Best,
Ragnhild Byrkjeland, Product Manager

Related products: 3D

Product Roadmap update - December 2024

2024 has seen a revolution in how industrial companies are starting to use data and AI to increase efficiency and profitability. We see reports of drastic reductions in time spent looking for data across data sources, and of workflows being automated with AI technology together with the Industrial Knowledge Graph in Cognite Data Fusion. A great overview of what you can already do today was just released in our December 2024 Product Tour https://www.cognite.com/en/product-tour with customer stories from Koch Ag & Energy Solutions, Idemitsu Kosan Inc, and Skagerak Energi.

With such an exciting year coming to a close, we look ahead to the 2025 roadmap and what new things our customers will be able to do tomorrow. We see increasing engagement with you on Cognite Hub, at our first user conference IMPACT 2024, and in daily conversations and interactions with customers and users in the field. These ideas and feedback are vital to creating a relevant and inspirational roadmap that empowers you to do more with data and AI.

Atlas AI

The launch of Cognite Atlas AI is an incredible breakthrough for building and deploying AI Agents. These have proven to be force multipliers in improving and automating complex workflows within, for example, maintenance planning, troubleshooting, and RCA. In 2025 the toolbox will become more versatile and powerful so that AI Agents can interact with data, workflows, and simulators. This allows you to move from simple access to complex industrial data to operationalizing use cases at scale. The AI Agents will also become more versatile through API interfaces, both for more advanced tools and for integration with your own third-party apps.

Connecting field and office

Industry research shows that process engineers, field workers, and maintenance teams spend up to 50% of their time searching for information, leaving less time for inspections, maintenance reviews, and general operations. Seamless collaboration between field and office brings faster resolution times, proactive problem solving, and higher productivity across your organization. Capture data and issues in the field, and tag your co-workers in the office with all the relevant data. You can pull this seamlessly and visually into Industrial Canvas, and recover from unplanned downtime faster with accelerated insights. The workflows we focus on are faster resolution times with collaborative and proactive troubleshooting, reduced shutdown time with higher-quality turnaround preparation, and connecting the field and office with digital operator rounds. AI Agents help you accelerate this process, for example by reviewing high-priority work orders or measurement trends relevant to what you are troubleshooting.

Rapid data onboarding and contextualization

Data-driven decisions and effective AI models require high-quality, real-time data from diverse sources, properly cleaned, contextualized, and modeled to power your enterprise. In 2025 you'll see a step-change in the speed and visual overview when onboarding industrial data: nimble enough to rapidly onboard and contextualize data for a unit or plant, yet powerful enough to orchestrate enterprise-scale global operations over time. Stay tuned here on Cognite Hub, where we'll be sharing more on these capabilities as they are released and available for you.
If you want to follow our development and shaping of these and many other features of Cognite Data Fusion, use Cognite Hub to engage with us, share feedback and product ideas, and stay up to date with the latest developments. Cognite roadmaps are forward-looking and subject to change.

Related products: Product Roadmap

Quarterly Training Updates on Cognite Academy

We are excited to share our quarterly training updates on Cognite Academy with you! This quarter, we've introduced exciting new courses, enhanced learning experiences, and valuable updates to support our growing community of customers and partners. Here's a quick look at our latest enablement content!

Cognite Data Workflows

Learn to automate and manage data processes within Cognite Data Fusion, effortlessly gaining control and visibility over data pipelines. With videos, hands-on exercises, and step-by-step guides, you'll quickly master setting up and running efficient data workflows.

Cognite Maintain Practitioner

Cognite Maintain is built to optimize maintenance workflows, reduce downtime, and boost efficiency. This course teaches you how to leverage data insights to prioritize tasks, plan effectively, and make informed decisions. Through hands-on tutorials, you'll gain practical skills to enhance maintenance with Cognite Maintain's powerful tools.

Microlearning Library Launch

Dive into key concepts with our short, focused modules that easily fit into your busy life. Whether you're picking up new skills or brushing up on old ones, microlearning gives you the freedom to learn at your own speed, one step at a time. These bite-sized sessions make staying up-to-date easier than ever. Check out our library with 14+ micro-courses on different product areas.

Working with Search and Browse 3D

Cognite Search and Browse 3D is an innovative tool designed for industrial experts. Whether you are a process engineer, production specialist, or reliability expert, this course is your go-to resource for seamlessly exploring, analyzing, and visualizing industrial data.

Cognite InRobot Practitioner Update

Discover how to harness the power of InRobot to automate industrial missions with ease. This course comprehensively introduces the InRobot application and its capabilities, empowering you to optimize your operations using mobile robots like Spot.

Cognite InField Practitioner Update

Cognite InField enables field technicians and operators to efficiently perform and plan their tasks. In this course, you'll learn to use Cognite InField to explore data (search and visualize), execute tasks (checklists), and plan fieldwork (templates and overview).

Bootcamp: 4-Day Intensive In-Person Training

Join our 4-day intensive Bootcamp to develop an end-to-end production-ready solution with Cognite Data Fusion. This hands-on training will guide you through the entire process, from data ingestion to deploying a fully functional solution. You'll gain practical experience, learn best practices, and collaborate with peers to solve real-world challenges.

Browse the entire Cognite Academy catalog for more courses.

Related products: Academy Trainings

Product Release Spotlight - December 2024 Release

We're thrilled about the December 2024 release of Cognite Data Fusion, and we think you will be too! Check out our spotlight video about our launch of Atlas AI. Below you can find a range of new functionality released across Cognite Data Fusion. We'll break it down into three sections, with details on the 20+ exciting new features we're bringing around Atlas AI, Industrial Tools, and Data Management. These enhancements will be made available on December 3rd, 2024. You can also find more detailed information in our release notes on our Documentation portal. We're eager to collaborate with you in Cognite Hub as you begin to test and leverage these features. And as always, we're interested to hear what you would like to see next by logging your suggestions as Product Ideas.

Atlas AI: Agent Builder; Agent Interaction; Language Model Library & Benchmarking; Document question answering
Industrial Tools: Search & Data Exploration; Industrial Canvas
Data Management: Connectors; Simulator Integration; Contextualization; Data Workflows; Transformations & RAW; 3D; Extractors

Atlas AI

Our vision for Cognite Data Fusion is to enhance industry efficiency through autonomous operations, with AI playing a crucial role. We see the future for our industrial users in three areas supported by Cognite Data Fusion: providing simple access to complex industrial data, enabling scalable use cases with out-of-the-box and tailored solutions, and leveraging AI agents to automate workflows. In line with this, we're excited to announce the release of Cognite Atlas AI, an industrial agent workbench that extends Cognite Data Fusion. It simplifies AI agent building with a low-code interface and ensures high-quality agents through integration with the industrial knowledge graph, minimizing data inaccuracies. Some of the key features include access to various Generative AI models, insights for model selection, and tools for task-solving. AI agents built with Cognite Atlas AI can be used directly in Cognite Data Fusion, utilizing tools like Industrial Canvas and Charts. We're thrilled for all our customers and early adopters to begin leveraging the range of features detailed below!

Agent Builder (Early Adopter)

Building AI agents for industrial applications can be complex, requiring deep AI and data science expertise. To simplify this, we're introducing a Low-code Agent Builder Experience in the new Atlas AI workspace. This feature offers ready-to-use Language Models, specialized Agent Tools for interacting with your Industrial Knowledge Graph, and Agent Templates to jumpstart development. These tools reduce the need for specialized expertise, enabling faster development cycles and empowering more users to create and manage AI agents efficiently. Documentation

Agent Interaction (Early Adopter)

Developing user interfaces for AI agents can be costly as their variety and number grow. Cognite accelerates this by enabling users to find, start, and interact with AI agents directly within Industrial Tools like Industrial Canvas and Charts, or through an agent library. This integration allows seamless data handling and analysis without needing separate interfaces, enhancing interaction with AI agents using natural language directly in Cognite Data Fusion. Documentation

Language Model Library & Benchmarking (Early Adopter)

The rapid release of new GenAI Language Models and the deprecation of older ones make it challenging to select the best fit for AI agents.
To streamline this process, Cognite Data Fusion now offers direct access to GenAI Language Models from various providers, complete with benchmark results tailored to industrial use cases. This feature eliminates the need for third-party subscriptions and integrations, providing easy access to top-tier models and significantly reducing the time spent on selecting the optimal Language Model for AI agent development. Documentation

Document question answering

Users often spend excessive time searching for insights within unstructured data formats like documents. To address this, we are bringing our Document Question Answering, Document Summarization, and Semantic Search capabilities to General Availability. These features are accessible for single and multiple documents in Search, Industrial Canvas, and InField, with tooling available for Atlas AI agents. This enhancement reduces the time needed to retrieve insights and improves the quality of answers by combining relevant information from multiple documents. Co-pilots for documents / Semantic search developer documentation / Document question answering developer documentation / Document summary developer documentation

Industrial Tools

In addition to our launch of Atlas AI, we are also enhancing many of our existing Industrial Tools, especially around Search and Industrial Canvas.

Search & Data Exploration

The industrial search: Subject Matter Experts (SMEs) need a straightforward tool for exploring asset-centric and knowledge graph data. To simplify this, Search is now the sole exploration tool in the Industrial tools workspace, featuring document categorization, search and filter in Tree view, file download, and viewing indirectly linked data. This change enhances the user experience by eliminating tool selection confusion, while the Data Explorer remains available in the Data Management workspace for specialized use. Documentation

3D Points of Interest: Navigating complex 3D environments can hinder collaboration and context-sharing. To address this, users can now mark, comment on, and share specific locations in 3D scenes, capturing details like coordinates and camera views. This feature enhances discoverability, strengthens collaboration with shareable points of interest, and boosts productivity by reducing search time and streamlining team workflows. Documentation

3D Ghosting and Image Navigation: Navigating complex 3D scenes and 360 images is challenging due to visual clutter and limited context. New features like ghosting for 3D content, contextualized 360 images, navigation between previously viewed 360s, and toggleable markers improve clarity and discoverability. These enhancements reduce clutter, provide better access to relevant data, and boost efficiency by simplifying navigation and focusing on key areas. Documentation

Industrial Canvas

Improved user experience when troubleshooting and conducting RCAs: We've made a range of Canvas enhancements, including displaying specific document pages directly, toggling time series data views, supporting clickable URLs, and customizable PDF exports. Performance improvements have cut document loading times from 5 minutes to 20-30 seconds. These updates expand use cases like work package preparation and make RCA and troubleshooting processes up to three times faster. Documentation

Data Management

And of course, to enable all of the experiences above, we're continuously innovating on the foundational features that enable you to ingest, contextualize, simulate, and govern your data in Cognite Data Fusion.
Connectors

Enhanced Power BI integration (Beta): Power BI users face issues accessing CDF data, causing potential performance issues and limited flexibility. New features include Cognite Authentication Service support, REST API functions (GET and POST) for accessing CDF resources, and GraphQL for efficient queries. These updates offer up to 10x faster data retrieval, providing greater flexibility and scalability for both basic and advanced users. Documentation

Simulator Integration

API-based simulation workflows: Industrial organizations often miss insights by not integrating simulators with plant data due to complexity and cost. Our solution offers a vendor-agnostic service with out-of-the-box connectors and a Simulators API for easy integration. This enables real-time performance optimization, predictive maintenance, and what-if analysis, reducing complexity and speeding up time-to-value for simulator-based workflows. Documentation

Orchestrating simulator workflows: Engineers and data developers struggle with automating simulation workflows that integrate operational data and post-processing. Our solution integrates CDF-based simulations with Data Workflows, allowing for flexible input/output handling and processing of simulation data. This enables automated, repeatable workflows, reducing manual intervention and simplifying data transfer between CDF resources and simulations. Documentation

Contextualization

Detailed Diagram Parsing (Beta): Engineering diagrams are crucial for SMEs in tasks like maintenance planning and root cause analysis, but current processes are manual. Our new feature detects symbols, tags, lines, and connections in vectorized diagrams to construct a knowledge graph. This enables use cases such as automated isolation planning, data validation by comparing P&IDs with other sources, and natural language search capabilities that leverage LLMs and the knowledge graph. Documentation

Parsing rasterized engineering diagrams with Data Model support (Beta): The Diagram Parsing application, formerly Interactive Engineering Diagram, detects tags in diagrams and links them to assets and files. Previously limited to asset-centric resources, the new version supports Data Modeling data, enabling non-coding users to create interactive diagrams directly through the UI. This update consolidates diagram parsing capabilities into a single, more accessible application. Documentation

Data Workflows

Event-based triggers and access scoping (Beta): Fixed-schedule data processing jobs can be inefficient and wasteful. To improve this, we offer Data Modeling event triggers with batching, starting workflows based on changes in Data Modeling (DM) instances that match user-defined filters. We're also bringing access scoping using data sets to production. These features enhance efficiency and scalability with event-based triggers, enabling effective use of Data Workflows in production and cross-team settings. Documentation

Transformations & RAW

File content support in transformations: Transformations needing full table reads, such as those with complex joins, face performance issues with RAW tables optimized for record reads. To address this, transformations now support reading from CDF Files in NDJSON format, offering an alternative for staging data when incremental loading isn't possible. This enhancement improves performance and scalability for transformations requiring full table reads.
Documentation

3D

Unified 3D contextualization: Contextualizing assets from isolated 360 images or point clouds can be difficult, as users often struggle with spatial relationships. To address this, users can now initiate contextualization directly from the 3D scene, maintaining spatial context and reducing confusion. This feature streamlines workflows by eliminating the need to interpret point clouds or navigate images separately, enhancing clarity and ensuring precise linking of assets to their real-world locations. Documentation

360 image contextualization with support for Data Modeling: Previously, users struggled to contextualize tags in 360 images, limiting workflow integration. Now, Tag Detection Suggestions automatically identify potential tags, and Manual Contextualization Tools allow users to refine or create links between equipment and tags. Fully supported in Data Modeling, these features save time and ensure precise asset tagging in 360 images. Documentation

Extractors

Hosted extractors for REST APIs and Azure Event Hub: On-premise, schedule-based extractors can burden local IT teams with maintenance and management. Hosted extractors alleviate this by shifting responsibilities to Cognite, enhancing data flow from source systems to CDF. With this release we're launching our hosted REST API extractor and an Azure Event Hubs extractor. This setup streamlines data flow from RESTful API interfaces and simplifies Azure Event Hubs-based Kafka configurations, reducing management costs for data onboarding to CDF. Event Hub extractor Documentation / REST extractor Documentation

Enhanced SAP integration: We are launching multiple updates to better support the connection of SAP to Cognite Data Fusion.

SAP Writeback: Support for Notifications and any Attachments (i.e., documents from SAP BW).
SAP extractor: Use OData V2 or OData V4 SAP S/4HANA endpoints.

Together, these eliminate human error resulting from missing data updates in the SAP system, improve performance, reduce system resource consumption, and support better SAP business logic integration (OData V4 support). Writeback of SAP Notifications and Attachments / Setting up the SAP extractor - OData setup

Related products: Product Releases, Public Beta, Industrial Canvas, Search and Data Exploration, API and SDKs, Data Workflows, Transformations and RAW, 3D, Contextualization

New Release of Cognite Toolkit (v0.3.0)

The main theme of this release is Quick Start: faster and easier onboarding to Cognite Data Fusion using the Toolkit. A few of the highlights:

Support for new resources: HostedExtractors, CogniteFile, Sequence, Robotics, 3DModel, Asset, and LocationFilter.

New command cdf modules. This has the subcommands init, add, list, and upgrade, and replaces the former cdf-tk init command. cdf modules init gives you a guided experience for selecting new modules. cdf modules add lets you add modules to your existing setup. cdf modules upgrade automatically upgrades your modules to match the Toolkit CLI version. cdf modules list gives you an overview of the modules you have.

cdf auth has been reworked into cdf auth init and cdf auth verify. This gives you a more guided experience for setting up the service principal required to run Toolkit. Device login is an easier and more secure replacement for interactive login.

Toolkit is now also released as a Docker image on Docker Hub.

New command cdf repo, with currently one subcommand, cdf repo init. This creates default GitHub workflows for running Toolkit in GitHub Actions using the new Docker image.

Plugins. Toolkit now has three plugins, run, pull, and dump, to help with governance-related tasks such as debugging Cognite Functions locally, pulling resources developed in the CDF UI, and dumping ungoverned resources.

Change: We have replaced _system.yaml with cdf.toml. This enables easier configuration of Toolkit, so you can set default values and easily enable plugins and feature flags for your project.

There are a few new modules as well: InRobot and Bootcamp. In addition, Toolkit now comes with a ready-to-extend version of the Core Model!

Related products: Toolkit

New Course on Cognite Maintain

We're excited to announce the launch of the Cognite Maintain course, which is now live on Cognite Academy! 🎉

Cognite Maintain is an application designed to optimize turnaround workflows, reduce downtime, and enhance operational efficiency. It enables teams to prioritize maintenance activities based on data insights, streamline maintenance planning, and ensure that assets are managed effectively to reduce costs and improve reliability. Using Cognite Maintain, you'll be able to leverage data to drive smarter maintenance decisions, ensuring that your operations are always running smoothly.

After completing this course, you will be able to:

Understand the concept of Cognite Maintain: Gain a comprehensive understanding of what Cognite Maintain is, including its core principles, purpose, and the value it brings to maintenance planning and optimization.
Identify when and why to use Cognite Maintain: Learn the scenarios where Cognite Maintain is most beneficial and discover how it enhances maintenance efficiency and decision-making.
Leverage key features and functionalities: Use Cognite Maintain's advanced capabilities to create, refine, and analyze maintenance plans, ensuring optimal resource allocation and performance.
Review and visualize work orders through multiple perspectives: Explore work orders using different visual formats, including 2D and 3D models and Gantt charts, to gain a deeper understanding and holistic view of maintenance activities.
Compare and evaluate activities across various digital scopes: Perform cross-comparisons of activities from different digital scopes to identify gaps, overlaps, and areas for improvement in your maintenance strategy.
Conduct in-depth plan analysis: Execute detailed analyses of your maintenance plans to assess effectiveness, uncover insights, and support continuous improvement in your maintenance workflow.

This course combines video tutorials and step-by-step guides to equip you with the skills to efficiently manage turnaround maintenance workflows using Cognite Maintain. Enroll in the Cognite Maintain learning path today and earn your certification, showcasing your expertise in optimizing maintenance workflows with Cognite's powerful application. Take the next step in advancing your skills and boosting your operational impact!

Do you have any training needs or ideas? Please share them in the Product Ideas section under the Academy Trainings tab. Start your learning journey today and register for the Cognite Maintain course on Cognite Academy.

Related products: Academy Trainings

New Course on Cognite Data Workflows!

We're excited to announce the launch of the Cognite Data Workflows course, now live on Cognite Academy! 🎉

Data Workflows is an integrated, managed orchestration service within Cognite Data Fusion, designed to automate and streamline data processes. It triggers tasks at the right time, keeps data up-to-date, and ensures that processes are smoothly managed by handling dependencies between them. With Data Workflows, you'll experience less hassle managing data pipelines, enjoy improved performance, and gain better visibility and control over your data operations.

This course offers a deep dive into:

Core concepts of Data Workflows: Master the fundamental principles and components of orchestration within CDF.
Task types: Learn about the various tasks, including CDF Transformations, Cognite Functions, and dynamic tasks.
Creating, running, and scheduling workflows: Hands-on practice in designing workflows that automate and optimize your data processes.

With a mix of video tutorials, practical exercises, and step-by-step guides, this course will equip you with the skills to efficiently manage data workflows within Cognite Data Fusion. Ready to level up your skills? Dive into the course now and start streamlining your data processes!

Have any training needs or ideas? Let us know by sharing them in Product Updates under the Academy Trainings tab. Start your learning journey today and register for the Cognite Data Workflows course on Cognite Academy.

Related products: Academy Trainings

Product Update: Triggers and Nested Subworkflows for Data Workflows

We're excited to announce two powerful new features for Data Workflows that will make automating and managing data pipelines easier: Triggers for Data Workflows and Nested Subworkflows.

1. Triggers for Data Workflows

Automating workflows has been tricky, requiring custom workarounds that make the process time-consuming. Understanding and managing the triggers behind each workflow can be difficult as well. With our new Triggers feature, you can now:

Natively define schedule-based (cron) triggers within Data Workflows.
Configure input parameters directly within each trigger.
Access detailed run history for all triggers to gain visibility into workflow execution.

This update simplifies automation by embedding trigger definitions directly into workflows, and is designed to evolve to support more complex trigger types in upcoming releases.

How to Get Started
API Documentation: Triggers API
Python SDK: Create Triggers
User Guide: Triggers in Data Workflows

2. Nested Subworkflows

Large workflows with many steps often become difficult to manage, and it's challenging to reuse common steps between different workflows. We've introduced Nested Subworkflows, which:

Allow you to reference another workflow definition in the subworkflow task type.
Automatically embed the referenced workflow into the main workflow at runtime.

This can be illustrated with an example, Workflow A and Workflow B. In the definition of Workflow A, a task of type subworkflow contains a reference to the definition of Workflow B, which is completely separate from Workflow A. When executing Workflow A, the definition of Workflow B will be dynamically loaded into and executed as a subworkflow inside Workflow A (illustrated below; see also the sketch at the end of this post). If Workflow B is executed directly, Workflow A is not impacted.

This enhancement simplifies the management of complex workflows by breaking them into smaller, manageable pieces. It also encourages the reuse of common steps across multiple workflows, improving efficiency and reducing redundancy.

How to Get Started
Documentation: Subworkflow Task
API Documentation: Subworkflow Task

Both of these updates are designed to make your data workflows more efficient, scalable, and easier to maintain. We're excited to see how you'll use these new capabilities to power your data solutions! Let us know in the comments if you have any feedback or questions!
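To make the two concepts above concrete, here is a minimal, hedged sketch of the payload shapes involved: a cron-based trigger for one workflow, and a task inside Workflow A that references Workflow B as a subworkflow. The field names are approximations inferred from the description in this post, not the authoritative schema; consult the Triggers API and Subworkflow Task documentation linked above for the exact format.

```python
# A minimal sketch of the two new Data Workflows concepts described above.
# All field names and external IDs below are illustrative assumptions, not the
# authoritative API schema -- check the Triggers API and Subworkflow Task docs.
import json

# 1. Schedule-based (cron) trigger: run "workflow-a" every day at 06:00, with
#    input parameters configured directly on the trigger.
trigger = {
    "externalId": "daily-run-workflow-a",
    "triggerRule": {"triggerType": "schedule", "cronExpression": "0 6 * * *"},
    "input": {"site": "plant-01"},
    "workflowExternalId": "workflow-a",
    "workflowVersion": "v1",
}

# 2. Nested subworkflow: a task in Workflow A's definition that references
#    Workflow B by external ID. Workflow B stays a completely separate workflow
#    definition and is only embedded into Workflow A at runtime.
subworkflow_task = {
    "externalId": "embed-workflow-b",
    "type": "subworkflow",
    "parameters": {"subworkflow": {"workflowExternalId": "workflow-b", "version": "v1"}},
}

print(json.dumps({"trigger": trigger, "subworkflowTask": subworkflow_task}, indent=2))
```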


Product Release Spotlight - September 2024 Release

Dear Community!

We are excited to announce the September 2024 release of Cognite Data Fusion! This post covers some significant highlights and will walk you through selected functionality:

Cognite core data model enablement: Rapidly onboard to Cognite data models; Search across the same data in all applications; Express the data in your own way with the tools you already love
Industrial tools: Filtering and navigation; Search defaults and improvements; Industrial Canvas; Maintain
Data Operations: Data Workflows; Time Series; 3D; Extractors; Simulators and connectors

These enhancements will be made available on September 17th, 2024. You can also find more detailed information in our release notes on our Documentation portal. We would love to hear your feedback here on Cognite Hub over the coming weeks. We also want to thank all our community members for their contributions so far. We're eager to continue this collaborative process with you, seeking your valuable input on both existing and upcoming product features. Please share your Product Ideas here in the community. This helps us understand how we can keep improving Cognite Data Fusion to fit your needs. Let's keep this momentum going! Here's a short video summarizing the exciting updates.

Data foundation

This September, our feature updates come with a unifying theme: Data Foundation. But what exactly does that mean, and why is it important now? As the name suggests, Data Foundation is about creating a solid base to build on and expand upon. The key reasons to invest in this foundation include:

Accelerating onboarding: By streamlining the onboarding process, we reduce the time it takes for users to start seeing value.
Laying the groundwork for future success: A strong foundation is crucial for success in areas like Atlas AI, digital operator rounds, turnarounds, and much more to come.
Enhancing user experience: Aligning our entire tech stack around these foundational features ensures a seamless and consistent user experience.

Cognite core data model enablement

We've simplified our approach to data modeling and aligned all our technologies around it to enhance your experience. We're introducing out-of-the-box Cognite data models, which streamline industrial data management by integrating a standardized core model with industry-specific extensions and customizable solutions. Starting this September, you'll have access to the Cognite core data model and the initial version of the Process Industry data model, the first in a series tailored to meet specific industry needs. These models serve as building blocks to solve your unique challenges without data duplication, offering guaranteed functionality without the need for additional configuration. Everything just works! In this release, we've achieved a lot: over 30 new features are coming your way, many of them tied to the Data Foundation theme. All our applications and underlying services now support the Cognite core data model. Read more about it here

Rapidly onboard to Cognite data models

You can now onboard data from source systems directly into Cognite data models using our Cognite File, OPCUA, PI, and SAP_OData extractors. This process is fully supported, with seamless integration into the Cognite core data model, allowing for the transformation of data and the creation of time series, files, and annotations. With this release, you can also access, search, and contextualize 3D data together with other Cognite data within a unified model.
This enhances insights, boosts efficiency, and enables better decision-making.

Search across the same data in all applications

You can search and find the same data from your desktop in the office or your mobile device while in the field. Enhanced search functionality uses the Cognite core data model to deliver more optimized results. Customer-defined data models can also be used, and users can select and arrange columns and customize views.

Express the data in your own way with the tools you already love

We're not stopping here: we know you love the tools you already use, so we've made sure they benefit from the latest updates. With this release, you can leverage powerful no-code troubleshooting and root cause analysis in Charts, using time series and activities natively from the Cognite core model. Additionally, you can now search and explore data seamlessly across both the Cognite core model and existing models in Industrial Canvas. In short, this update provides a more intuitive and uniform search experience across all CDF applications.

Industrial tools

Filtering and navigation

Filtering: In addition to location filters in Search, Canvas, and Charts, you can select specific data models. Choose between predefined location filters and specific data models for a flexible user experience that meets different needs for different users.

Unified navigation: Users have faced challenges when navigating through the CDF platform, leading to confusion and inefficiencies. In response to this feedback, we're thrilled to introduce a unified top bar with breadcrumbs. This new feature is designed to streamline navigation across all applications and services within the platform.

Search defaults and improvements

Default exploration tool: To enhance accessibility and usability, we are making Search the default search engine for our industrial users. Users will still have the option to switch back to Data Explorer easily by clicking the “Switch to Data Explorer” option. By making Search the new default search experience, we bring the Cognite core data model closer to end-users, ensuring a more streamlined experience when exploring data.

Hierarchy view: We are excited to announce a significant enhancement to the search functionality that addresses navigation challenges within the asset hierarchy! To improve the user experience, we are rolling out a hierarchical view for asset-centric data model assets. This new feature lets users look at assets in a clear and organized tree-view format. They can quickly browse and understand relationships between assets, which makes data handling more efficient.

Slicing in x, y, and z direction in 3D: Many users have had trouble when trying to look at specific parts of 3D models. The inability to precisely crop and slice planes has led to inefficiencies and limited insights, making it challenging to focus on the areas that matter most. To address these issues, we are introducing new tools for cropping and slicing 3D models along the X, Y, and Z planes. This will enable users to conduct detailed examinations of specific 3D sections, enhancing their understanding of complex models and reducing the time required to isolate areas of interest, allowing for faster analyses.

Industrial Canvas

Dedicated cause map diagram (Beta): Our users have expressed that they are spending more time managing tools and data, rather than focusing on the root causes of issues and analyzing them. This limits their ability to derive insights and slows down their time to value.
To address this challenge, we are excited to introduce the ability to easily create cause maps directly in Industrial Canvas, where all necessary data is readily available as evidence for approving or rejecting a hypothesis. This feature includes auto-alignment when creating cause maps and built-in status indicators to streamline the process even further.

Rule-based coloring (Beta): This release also includes a significant enhancement that will empower reliability engineers to better monitor the health of their plants and critical systems. Currently, this process involves checking each piece of equipment individually, which can be both time-consuming and cumbersome. To streamline this process, Canvas is introducing support for color coding assets and pipes on diagrams. This feature will categorize equipment based on several vital rules, such as corrosion rate, operating temperature, and more, providing a clear and immediate visual representation of conditions.

AI-improved retrieval for multiple documents in Industrial Canvas and search (Beta): It's time-consuming for industrial users to identify relevant documents and search for specific data within each document. To make this easier, we're introducing the ability to ask Copilot questions, which will search up to 100 documents simultaneously. This will allow users to use insights across multiple documents and get relevant information quickly and effectively.

Maintain

Budget awareness in turnaround scoping: Managing the cost impact of turnaround scopes can often be complex, especially as the scope evolves. By enhancing cost awareness throughout the scoping process, organizations can significantly improve the return on investment (ROI) for their shutdown projects. Users are now supported with automatically calculated costs based on work orders and resource estimates within a planned scope. This allows for real-time cost assessment as changes occur. Also, Maintain now supports the evaluation of different scope scenarios against their associated costs. This enables teams to make informed decisions and assess the financial impact of various scoping options.

Data Operations

Data Workflows

Schedule-based triggers: Automating data workflows has often required cumbersome custom workarounds, leading to time-consuming implementation processes. Additionally, understanding the triggers behind each workflow could be complex and unclear. To streamline this, we are introducing the ability to natively define schedule-based triggers within your data workflows. This lets you include input parameters directly within trigger configurations and access a detailed run history for each trigger. Built-in triggers simplify the automation of data workflows, reducing complexity and implementation time.

Time Series

Improved data point queries: Users want better ways to get data points from time zones other than UTC. They also want to be able to specify calendar months in their queries. Previously, answering queries such as “Give me the average values for the month of February 2022, in the India time zone” was cumbersome and complicated. To facilitate easier data access, we are enabling users to retrieve data points based on a named time zone, or using specified time offsets, directly from the Time Series API. More details can be found in the API guide. Also, we've added monthly granularity support for retrieving aggregates.
Users can now specify the aggregate bucket size in months, and the API will automatically return the aggregates they want based on the start and end date they specified. For more information on granularity, please refer to the API guide.

3D

Improved experience in setting up 3D content: Users have faced challenges with the current 3D content upload and configuration process. This has led to prolonged time to value and limited scalability for projects relying on 3D content. To address these issues, we have revamped the 3D sub-application to provide a simplified experience. This enhancement streamlines the upload and configuration process, making it easier and more intuitive.

Extractors

Hosted extractors for MQTT and Kafka: Traditionally, Cognite extractors have only been deployed on-premise and operate on a schedule. This deployment and management model has represented significant management challenges for local IT teams. The maintenance and management of these extractors can be burdensome, consuming valuable resources and time. In this release, hosted extractors begin to address these problems. Cognite is now taking on the burden of maintenance, updates, and management from local IT teams. This will significantly improve the flow of data from source systems to Cognite Data Fusion.

Hosted extractors for REST and Azure Event Hub (Beta): Extractors can pose a considerable management challenge, particularly for local IT teams, leading to resource allocation and management issues. To address this, we are introducing hosted extractors. This transfers the responsibilities of maintenance, updates, and management from local teams to specialists at Cognite. The cloud-hosted REST extractor facilitates data extraction from source systems behind a REST API, enhancing accessibility and integration. The cloud-hosted Azure Event Hub extractor streamlines data extraction processes from Azure Event Hub, providing efficient data management and flow.

Write back to SAP source system (Beta): Users managing maintenance operations through Cognite InField have faced challenges due to the existence of two “sources of truth” for specific SAP data. This requires two updates after field maintenance operations, one in Cognite InField and one in the SAP source system, leading to inefficiencies and potential discrepancies. We're excited to announce the implementation of a bidirectional data flow that enables seamless synchronization of SAP notification data between SAP S/4HANA and Cognite Data Fusion. Now, when Cognite applications update SAP notifications, these changes will automatically reflect in both systems. We believe this enhancement will streamline workflows, improve data accuracy, and improve the overall user experience for maintenance operations.

Simulators and connectors

Simulator integration service (Beta): Industrial organizations often miss opportunities to generate valuable insights by combining simulators with plant data. The traditional process of integrating these powerful tools is usually expensive and time-consuming, which leads to underutilization of simulators in production applications. We're proud to introduce a vendor-agnostic service that supports integration with simulators from multiple vendors. This includes out-of-the-box connectors to industry-standard simulator tools, ensuring streamlined integration processes, as well as a Simulators API that allows for programmatic interaction and orchestration of simulator resources.
This integration enables real-time asset performance optimization, predictive maintenance, and what-if scenario analysis, driving significant operational improvements.

GAP connector (Beta): Oil & Gas operators have faced challenges in integrating the PETEX GAP simulator with their ET, OT, and IT data. This limitation hinders their ability to leverage GAP's sophisticated multiphase network optimization capabilities and build process twins in cloud-based, real-time applications. Cognite introduces a native GAP connector within Cognite Data Fusion. This enhancement allows for seamless integration of the GAP simulator, alongside efficient handling of GAP and PROSPER dependencies. This integration will significantly reduce both the time and costs associated with building digital Process Twins using Cognite Data Fusion, streamlining your workflows.

Development of new simulator connectors (Beta): Developing simulator connectors for different vendors and disciplines has historically been challenging, time-consuming, and costly. This has prevented customers from fully using their preferred simulation tools within Cognite Data Fusion. To address these challenges, we are introducing the Connector Developer Toolkit, which includes:

Detailed Documentation: Comprehensive guidance for creating new connectors tailored to your specific needs.
Library of Reusable Logic: A collection of reusable connector logic and building blocks to streamline development.
Open-Source Example: Access to a functional example of a connector, aiding developers in understanding best practices and implementation.

Related products: InField, Product Releases, Maintain, Industrial Canvas, Charts, Search and Data Exploration, API and SDKs, Extractors, Data Workflows, Transformations and RAW, 3D, Contextualization, Data Modeling

Time Series New Features - Time Zone and Monthly Aggregates

Dear Cognite Data Fusion Users,

We are delighted to announce the general availability of time zone support and the monthly aggregates API feature for Time Series. This feature allows API users to retrieve aggregated time series data using:

A named time zone, using time zone names from the IANA time zone database.
A specific time offset in 15-minute increments.
Monthly time granularity, in addition to the existing Second, Minute, Hour, and Day granularities.

Time Zones

When using a named time zone to retrieve the data, the API applies the appropriate offset to the aggregated values of the data points in order to return the correct selection of data. The feature works for all named time zones, including those with a 15-minute offset such as Nepal.

Aggregations by Month

When retrieving data using the new month granularity, users can specify an aggregate bucket size of any useful number of whole months. When retrieving aggregated data, the API will automatically return aggregated values for the month(s) contained within the query, and will automatically account for the number of days in the month, including leap years and daylight savings time adjustments.

The feature is available on both Time Series and Synthetic Time Series queries, and support is implemented in both the Python SDK and the Cognite Data Source for Grafana. For more details on how to use the new features, please consult our Developer Guide and API specification. A hedged example of what such a query could look like in the Python SDK is shown below.
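Here is a minimal sketch for Python SDK users, assuming a recent cognite-sdk version. The method name retrieve_dataframe_in_tz, its parameters, the granularity spelling, and the time series external ID are assumptions to verify against the Developer Guide and SDK reference mentioned above.

```python
# A minimal sketch, assuming a recent cognite-sdk and pre-configured authentication.
# Method and parameter names are assumptions -- verify against the SDK reference.
from datetime import datetime
from zoneinfo import ZoneInfo  # IANA time zone names, e.g. "Asia/Kathmandu" (a 15-minute offset zone)

from cognite.client import CogniteClient

client = CogniteClient()

kathmandu = ZoneInfo("Asia/Kathmandu")

# "Average values for February 2022, in the Nepal time zone": monthly buckets are
# aligned to calendar months in the requested zone, accounting for month length.
df = client.time_series.data.retrieve_dataframe_in_tz(
    external_id="my_timeseries_external_id",  # hypothetical time series
    start=datetime(2022, 2, 1, tzinfo=kathmandu),
    end=datetime(2022, 3, 1, tzinfo=kathmandu),
    aggregates="average",
    granularity="1month",
)
print(df)
```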

Related products: API and SDKs

Gain Deeper Data Insights with New Inspection Endpoints for Views and Containers

We're excited to announce the release of powerful new features in our APIs that empower you to gain deeper insights into your data and simplify container management. With these new tools, you can:

See how different parts of your schema connect.
Find out exactly what data is in your instances.
Get a clear picture of your data structure.
Make informed decisions when deleting data, knowing the full impact.

Easier Container Management (DMS API only):
Identify Linked Views: Effortlessly locate all views associated with a particular container.
Comprehensive View Count: Gain an accurate total of all views connected to a container, including those potentially outside your access.

Inspecting Instances (DMS API only):
Pinpoint Data Containers: Quickly identify which containers store data for a specific instance.
Mapped View Clarity: Easily see the views that map the containers housing your instance data.

These innovative features empower you to:
Strengthen Data Understanding: Gain deeper knowledge about your data structure and relationships for improved decision-making.
Boost Efficiency: Save time and effort with intuitive inspection tools that provide a clear view of your data.

Get started exploring your data like never before! We encourage you to review the updated documentation for detailed instructions on leveraging these exciting new functionalities.
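For developers who want to try the instance inspection capability from code, here is a minimal, hedged sketch using the Python SDK's raw post helper. The endpoint path and request body fields are assumptions based on the capability described above (identifying which containers, and which mapped views, hold data for an instance); check the updated DMS API documentation for the exact contract.

```python
# A hedged sketch: inspect which containers and views hold data for one instance.
# The path "/models/instances/inspect" and the body fields are assumptions --
# consult the DMS API documentation for the exact endpoint and schema.
from cognite.client import CogniteClient

client = CogniteClient()  # assumes authentication is already configured

body = {
    "items": [
        # Hypothetical instance identifier
        {"instanceType": "node", "space": "my-space", "externalId": "pump-001"}
    ],
    "inspectionOperations": {"involvedViews": {}, "involvedContainers": {}},
}

response = client.post(
    f"/api/v1/projects/{client.config.project}/models/instances/inspect",
    json=body,
)
print(response.json())
```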

Related products: Data Modeling

Charts New Features - June 2024 Release

Hei Everyone!

We're excited to announce the latest updates to Charts, designed to enhance your data visualization and analysis experience. In this release, we've focused on improving data viewing capabilities, introducing new features for performance monitoring, and maturing the monitoring and alerting capabilities in Charts. These enhancements are aimed at providing users with greater flexibility, control, and confidence in leveraging Charts for their data-driven decisions.

1. Time Series Status Codes in Charts

We've enhanced the Charts time series viewer to display not only good data points but also bad and uncertain data. Now, uncertain data points are shown with grey shading, while bad data points are shown as gaps. This improvement enhances trust in our product and allows for more accurate data representation.

Before: If you look at this chart carefully, you will notice lines being drawn between gaps, giving the impression that the data is being interpolated. This led to an inaccurate representation of the data from the source systems in Charts.

After: You will now notice some visual improvements in the time series viewer. We now show uncertain data points with grey shading, while bad data is represented as a gap. This leads to a much more accurate representation of the data as seen in the source systems, resulting in more trust in Charts and a more reliable RCA/troubleshooting job.

Note: To enable these indicators, please contact the relevant CDF individual to help you get this data from your source systems into CDF by updating the extractors.

2. Monitoring and Persisted Calculations in GA

Both monitoring and persisted calculations have matured from Beta to production. These capabilities are crucial for the troubleshooting/RCA process, and we're excited about the positive impact on our customer base. Learn more: Monitoring Documentation, Persisted Calculations Documentation

3. Live Mode in Charts

We're delighted to introduce Live Mode in Charts! You'll now see a heartbeat icon at the top of the Chart bar. Toggling this setting enables auto-refresh, so you can see new data points in CDF automatically without having to move around or click.

Bug Fixes:
Fixed persisted calculations preview not working: The preview functionality for scheduled calculations now works correctly.
Fixed an issue causing charts to crash when zooming out: Charts no longer crash due to a stack trace issue when zooming out.
Plus, we've resolved around 35 backend and frontend bugs to improve performance and user experience.

Thank you for your continued support and feedback. We hope you enjoy these new features and improvements. As always, your feedback is invaluable to us. Happy Charting!

The Charts Team

Related products: Charts

New navigation update - June 2024 release

We're excited to announce some significant updates to our platform's navigation and the introduction of Workspaces!

Enhanced Navigation

We've evolved our navigation from the traditional top bar to a more intuitive and accessible sidebar. This move is designed to make it easier for you to find what you need, when you need it, with fewer clicks and a cleaner look.

Introducing Workspaces

Recognizing the diverse needs of our IT users and industrial end users, we're rolling out Workspaces. Workspaces are designed to surface the tools and data most relevant to you, reducing complexity and enhancing productivity.

Industrial tools workspace: Created with operational teams in mind, this space gathers our industrial tools in one central location, bringing data insights and collaboration tools to the forefront.
Data management workspace: Optimized for administrators and IT professionals, this workspace streamlines access to integration, contextualization, validation, and system health insights.
Admin workspace: Exclusive to users handling access management, this workspace is where you manage permissions and grant users access to necessary resources within CDF.

How does it work?

You can easily toggle between workspaces with the new sidebar navigation. We'll also guide you through this process when you log in to CDF for the first time after the release. Additionally, you can collapse the sidebar by clicking the two arrows in the top-right corner. This feature is particularly useful when working within our tools.

These updates are part of our ongoing commitment to improving your experience and making Cognite the most user-friendly and efficient platform for all your industrial data needs. We can't wait for you to dive into the new experience. We're looking forward to hearing your thoughts on these enhancements.

Related products: Product Releases

Streamlit Low-code Applications - June 2024 release

Hello community!

As you may have noticed in the post on the upcoming June product release, we are releasing a new beta feature enabling you to build and deploy low-code data applications using Streamlit 🚀

It is often a tedious task to create and share data applications, including even simple dashboards. Considerations such as hosting infrastructure, approvals with IT teams, and availability of data are often blockers for progress. Our new feature allows you to build low-code applications in Python, leveraging the Streamlit framework, and deploy them instantly for users to access. Below you will find a walkthrough of the new experiences for both application builders and consumers. You can also find more information in our documentation. We hope you find this feature as exciting and valuable as we do. As always, we are looking forward to hearing your feedback 😄

Building & deploying applications - Data Management workspace

To build and deploy Streamlit apps, you need to navigate to the “Data Management” workspace, expand “Explore”, and select “Streamlit apps”. After clicking on “Streamlit apps”, you will be taken to an overview of existing Streamlit applications. Here you will be able to see all applications created within your Cognite Data Fusion project, and filter the applications already published for use, in addition to applications made by yourself.

To create a new application, select “Create app” in the upper right corner of the screen. This will make a form appear asking you to name your application, add a description (optional), place the application in a Data Set (optional), and either build the application from scratch or get inspiration from pre-made templates. There is also an option to import Streamlit application files. Note: Streamlit applications are stored as files in Cognite Data Fusion, so you will need write access to Files to be able to create applications.

Once you have filled out the form, a new screen appears displaying the Streamlit application's Python code and what the application looks like. You can click on “Show / Hide” in the upper right corner to remove both the code editor and the top toolbar, enabling you to view the application in full screen, similar to what the end user will experience. After creating or editing your application, you can click on “Settings” in the bottom left of the screen. Here you can make changes to the information provided earlier, and a few other choices such as light or dark mode themes. Most importantly, you can select to publish or unpublish your application. Published applications appear in the “Industrial Tools” workspace. More on that in the next section!

Using applications - Industrial Tools workspace

To access and use the published Streamlit applications, navigate to the “Industrial Tools” workspace and select “Custom apps”. You will then be able to view and search for all Streamlit applications published to your Cognite Data Fusion project, given you have the necessary access. Note: Since Streamlit applications are stored as files, in order to use an app, you will need Files read access to the Data Set it is stored in. You will also need read access to the necessary Data Resource Types used in the application (e.g., Time Series, Data Model instances, etc.). In the case where no Streamlit applications have been published to your project, you will be met with an empty screen guiding you to the documentation on how to build and deploy applications.
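To give a feel for what a minimal app can look like, here is a hedged sketch of a Streamlit application that searches for assets in your project. It assumes the CDF-hosted Streamlit environment lets you instantiate CogniteClient() without passing explicit credentials; the asset search call is standard cognite-sdk, but verify both against the documentation linked above.

```python
# A minimal sketch of a low-code Streamlit app backed by Cognite Data Fusion.
# Assumption: in CDF-hosted Streamlit apps, CogniteClient() picks up authentication
# from the environment; outside CDF you would configure credentials explicitly.
import streamlit as st
from cognite.client import CogniteClient

client = CogniteClient()

st.title("Asset lookup")
query = st.text_input("Search for an asset by name", value="pump")

if query:
    # Standard cognite-sdk asset search; the limit is kept small for a snappy demo.
    assets = client.assets.search(name=query, limit=10)
    st.write(f"Found {len(assets)} asset(s) matching '{query}':")
    for asset in assets:
        st.write(f"- {asset.name} (external_id={asset.external_id})")
```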

Related products: Public Beta, Streamlit

Product Release Spotlight - June 2024 Release

Hi everyone! 👋 The next major Cognite Data Fusion product release is soon approaching on June 4th. We’re excited to announce lots of new upcoming features across our Industrial Tools and Data Operations capabilities. This post will walk you through selected highlights from the release, including:

INDUSTRIAL TOOLS
Cognite Search | comprehensive data exploration for industrial users
Industrial Canvas | collaborative troubleshooting and analysis
Charts | no-code time series monitoring
Charts | data quality indicators (beta)
Fusion Landing Page | workspaces
InField | customizable field observations (beta)
InField | mobile data explorer
InRobot | offline robotic mission planning (beta)
Maintain | enhanced activity sequencing (beta)
Maintain | shift support (beta)
Streamlit | low-code applications (beta)

DATA OPERATIONS
Auth & Access Management | simplified user management
3D | contextualization and configuration enhancements
Extractors | Kafka hosted extractor (beta)
Data Workflows | data orchestration API
Time Series | data quality status codes
Time Series | improved data point queries (beta)

These, and much more, can be found in our latest release notes. Check out all the additional features and improvements which will enable your teams to drive even more value from Cognite Data Fusion. We also recommend watching the June Spotlight video. These new capabilities will be available to you on June 4th, 2024. We would love to hear your feedback here on Cognite Hub over the coming weeks. We also want to thank all our community members for your contributions so far. We’re eager to continue this collaborative process with you, seeking your valuable input on both existing and upcoming product features. We encourage you to continue to submit your Product Ideas here in the community, to help us understand how we can continue to evolve Cognite Data Fusion to fit your needs. Let’s keep this momentum going!

INDUSTRIAL TOOLS

Cognite Search | comprehensive data exploration for industrial users
Official launch of Cognite Search, a streamlined search and data exploration experience for industrial users. This first major release includes sorting search results by properties, adjusting visible columns in search results, and allowing admins to set default filter combinations for all users. Additionally, the Location (Site) configuration now supports subtree and external ID prefix filtering. These updates simplify data exploration across the portfolio, making it more accessible for industrial users. In the 3D search experience, we’ve also released tools for measuring multipoints, areas, and volumes, as well as rules that trigger color changes on contextualized objects when specific criteria are met. These improvements offer better spatial awareness through advanced measurements and real-time visual alerts, significantly boosting operational efficiency and decision-making. Read more about it here

Industrial Canvas | collaborative troubleshooting and analysis
Official launch of Industrial Canvas, a powerful tool designed to streamline collaboration on industrial data. It allows users to integrate assets, time series, 3D models, and more into an infinite canvas, and includes markups, shapes, lines, versioning, canvas locking, commenting, and sharing capabilities, all with enhanced performance and interactivity.
By enabling direct collaboration on contextual OT, IT, and ET data, Industrial Canvas facilitates quicker, higher-quality decisions and reduces the time spent on data collaboration, allowing more focus on production improvements. Read more about it here

Charts | no-code time series monitoring
Official launch of monitoring in Cognite Charts, replacing what has traditionally been a time-consuming task for automation engineers. Users can now easily create thresholds on time series and set up alerts to get notified when these thresholds are breached. These enhancements reduce the burden of setting up monitoring, allowing anyone to monitor time series data. The alert system ensures proactive investigation when key equipment indicators deviate, improving overall efficiency and responsiveness. Read more about it here

Charts | data quality indicators (beta)
View time series data quality codes in Cognite Charts, such as “Bad” and “Uncertain”, on both raw and aggregated data points. Additionally, the raw data point limit has been increased to 100,000, and there is improved indication of gaps in calculations when dealing with bad or uncertain data. These updates enable users to trust the data they see, enhancing confidence in their decision-making processes and reducing the need to revert to source systems. Read more about it here

Fusion Landing Page | workspaces
Persona-based workspaces tailored specifically for Industrial Users and Data Managers address the previous challenge of a mixed interface. This update includes a revamped sidebar and home page, making it easier for users to find the tools and information they need right from the start. The new design simplifies onboarding for both Industrial and Data Expert users and lays the groundwork for more personalized landing pages in the future.

InField | customizable field observations (beta)
Configurable Observations fit into field workers’ specific workflows. Users can create Observations directly from their desktops and use improved filtering and search capabilities. These enhancements make it easier to review media and discover high-criticality findings. By tailoring Observations to field workflows, users can take better actions and quickly address important issues, improving the quality of data from the field, increasing overall efficiency, and reducing response times.

InField | mobile data explorer
The official launch of a mobile-only “Search” landing page improves access to relevant data in the field, addressing the issue of siloed information across different source systems. Field workers can now configure various InField workflows, such as observations and checklists, enabling or disabling them as needed. This update provides instant access to crucial data, offering a simple yet scalable solution for field workers. It simplifies deployment and can be expanded to accommodate additional workflows over time, enhancing overall efficiency and troubleshooting capabilities.

InRobot | offline robotic mission planning (beta)
Plan and create robotic missions offline using digital twins and contextualized 360° images, addressing the high costs and deployment delays associated with manual operator rounds. This feature enables users to define tasks and camera positions based on visual data, allowing the entire robotic mission to be prepared in advance. Once the robot is onsite, it can execute recurrent missions without any additional configuration, streamlining operations and reducing setup time.
Maintain | enhanced activity sequencing (beta)
Automated sequencing for activities based on defined dependencies and constraints, such as shift duration, day or night work, and overlapping activities, addresses the inefficiencies of manual and Excel-based resource estimation for shutdowns or maintenance campaigns. Users can now gather sets of work orders into multiple sequences and toggle them on and off to observe their impact on the tradematrix. These features improve resource planning, reduce execution delays, and eliminate the cumbersome manual work of creating work order sequences and corresponding tradematrixes.

Maintain | shift support (beta)
Enhanced Gantt functionality displays both day and night shifts within a single day, along with the corresponding tradematrix for each shift. This update addresses the previous limitation in Maintain, which made it difficult to assess plans involving multiple shifts within the same day. By providing a clear view of resourcing needs for both shifts, this feature improves resource planning, reduces execution delays, and eliminates the need for manual Excel work to create work order sequences and corresponding tradematrixes.

Streamlit | low-code applications (beta)
Users can now build low-code applications in Python using the Streamlit framework, simplifying the process of deploying applications. This capability addresses the time-consuming need for IT approval and hosting infrastructure. Users can instantly deploy applications, making them accessible to non-coders. By lowering the burden of app development for citizen developers, this accelerates innovation and decreases the time required to share new solutions within an organization.

DATA OPERATIONS

Auth & Access Management | simplified user management
CDF admins can now manage user access by adding users directly to groups within CDF, bypassing often time-consuming approval processes. Additionally, admins can create a “default” access group for new users. This update eliminates delays, minimizes errors, and reduces the burden on internal approval processes, ensuring a smoother and more efficient onboarding process. As a result, new users experience a better first-time use of the product. Read more about it here

3D | contextualization and configuration enhancements
The new 3D Scene Configurator aligns all 3D data correctly in relation to each other, addressing the challenge of viewing different models representing the same location together in one view. This improvement simplifies workflows and ensures accurate spatial relationships, providing a more immersive and precise representation of real-world environments for decision-makers in Cognite’s Industrial Tools or 3rd party applications developed with Reveal. Read more about it here

Extractors | Kafka hosted extractor (beta)
Support for Kafka messages as a hosted extractor eliminates the need for custom extractor development and deployment. This new extraction method allows users to connect to a Kafka broker and subscribe to a single topic directly. By supporting this popular event streaming protocol, the update provides instant connectivity, removing the need for downloading and installing extractors, and significantly streamlines the process for clients already using Apache Kafka. Read more about it here
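For teams already producing to Kafka, the hosted extractor simply subscribes to an existing topic. As a rough illustration, here is a minimal producer sketch in Python using the confluent-kafka library; the broker address, topic name, and message shape are all hypothetical, and how messages are mapped to CDF resources is defined in the hosted extractor configuration:

```python
import json
from confluent_kafka import Producer

# Hypothetical broker - replace with your own Kafka cluster address.
producer = Producer({"bootstrap.servers": "broker-1.example.com:9092"})

# Hypothetical message shape; the hosted extractor's mapping configuration
# determines how fields like these become CDF time series data points.
payload = {
    "externalId": "pump_23_pressure",
    "timestamp": 1717500000000,
    "value": 42.1,
}

# Publish to the topic the hosted extractor subscribes to (name is a placeholder).
producer.produce("plant-telemetry", value=json.dumps(payload).encode("utf-8"))
producer.flush()  # block until the message is delivered
```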
Data Workflows | data orchestration API
The official launch of the Data Workflows API allows users to orchestrate workflows consisting of Transformations, Functions, and CDF requests, along with their interdependencies. This update addresses the challenges of fragile pipelines, stale data, and difficulties in monitoring and scaling. By enabling the orchestration and monitoring of these workflows, users can minimize the effort required to manage data processes, significantly enhance the robustness and performance of data pipelines, and gain comprehensive observability of end-to-end pipelines rather than just individual processes. Read more about it here

Time Series | data quality status codes
Representation of time series data quality is now based on OPC UA standard status codes. This feature addresses the issue of users having to assume that gaps in time series data are due to bad quality. By clearly indicating the quality status of each data point, users gain increased trust in the data. They can also choose how to treat good, bad, and uncertain data points in their calculations, preventing the automated removal of low-quality data during onboarding and ensuring more reliable data analysis. Read more about it here

Time Series | improved data point queries (beta)
Enhanced capabilities for retrieving data points and aggregates address the need for convenient time series data point queries. Users can retrieve data points based on a named time zone or specified time offsets in 15-minute increments. Additionally, data point aggregates can be obtained using Gregorian calendar units such as days, weeks, months, quarters, and years. This feature increases developer speed by providing a convenient API for data retrieval and improves the accuracy of queries by reducing the likelihood of code errors. Read more about it here
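As a rough sketch of how these two time series capabilities surface in the Cognite Python SDK (the parameter names reflect recent SDK versions and may differ in yours; the external ID and time zone are placeholders):

```python
from cognite.client import CogniteClient

client = CogniteClient()  # assumes credentials are configured in your environment

# Retrieve raw data points together with their OPC UA status codes, keeping bad
# and uncertain points instead of silently dropping them (assumed parameter names).
raw = client.time_series.data.retrieve(
    external_id="pump_23_pressure",  # hypothetical external ID
    start="7d-ago",
    end="now",
    include_status=True,
    ignore_bad_datapoints=False,
    treat_uncertain_as_bad=False,
)

# Calendar-unit aggregates in a named time zone (beta capability; the granularity
# and timezone arguments shown here are assumptions).
monthly = client.time_series.data.retrieve(
    external_id="pump_23_pressure",
    start="365d-ago",
    end="now",
    aggregates="average",
    granularity="1month",
    timezone="Europe/Oslo",
)
```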

Related products: InField, Product Releases, Maintain, Public Beta, InRobot, Industrial Canvas, Charts, Search and Data Exploration, API and SDKs, Extractors, Data Workflows, 3D, Authentication and Access Management

Data Model UI Updates - April Release

Data Modeling provides you with the flexibility to define your own Industrial Knowledge Graphs based on relevant industry standards, your organization’s own data structures and use cases, or a combination of all of these. Large, and often complex, Industrial Knowledge Graphs may be needed to represent the full extent of your industrial data across disciplines. An important aspect of these knowledge graphs is being able to explore and iterate on both their data and their structure. With this release, we are enhancing the Data Modeling user interface in Cognite Data Fusion to better visualize the containers and views of your data model.

Isolate a view or container
You can now isolate a view or container by clicking on an item and choosing from the Quick Filter in the bottom right corner. You can decide between isolating the view or container by itself, or showing all other views and containers related to the current selection. You can then further decide which related views and containers should be visible for the selected view or container. Click “Reset” in the search bar at the top to return to the layout shown when the page first loaded.

Clearer relationships visualization
Relationship visualization is clearer for self-referencing relationships. Arrows now clearly identify the direction of a relationship (a circle indicates the source, an arrowhead the target). There is also a simple way to expand an item to see only its relational properties (edge or direct relations) without showing all properties.

Identifying all Views powered by a Container
When viewing containers within a space, the selected container lists all the views that use it.

As part of the release, we have also added or fixed the following:
Data management - displays data from reverse direct relations
General - all spaces, containers, and views are displayed, instead of just the first 1,000
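If you prefer to inspect the same structures programmatically, here is a minimal sketch using the data modeling APIs in the Cognite Python SDK; the space name is a placeholder, and the method and attribute names reflect recent SDK versions, so treat the mapping logic as an assumption rather than a definitive recipe:

```python
from cognite.client import CogniteClient

client = CogniteClient()  # assumes credentials are configured in your environment

SPACE = "my_space"  # hypothetical space name

# List every view and container in the space (limit=-1 means "no limit"),
# mirroring how the updated UI now shows beyond the first 1000 items.
views = client.data_modeling.views.list(space=SPACE, limit=-1)
containers = client.data_modeling.containers.list(space=SPACE, limit=-1)
print(f"{len(views)} views and {len(containers)} containers in {SPACE}")

# For each view, collect the containers its mapped properties point to;
# inverting this gives the "views powered by a container" list the UI shows.
for view in views:
    used_containers = {
        prop.container.external_id
        for prop in view.properties.values()
        if hasattr(prop, "container")  # skip edge/connection properties
    }
    print(view.external_id, "uses containers:", sorted(used_containers))
```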

Related products:Data Modeling