

New navigation update - June 2024 release

We're excited to announce some significant updates to our platform's navigation and the introduction of Workspaces!

Enhanced navigation

We've evolved our navigation from the traditional top bar to a more intuitive and accessible sidebar. This change is designed to make it easier for you to find what you need, when you need it, with fewer clicks and a cleaner look.

Introducing Workspaces

Recognizing the diverse needs of our IT users and industrial end users, we're rolling out Workspaces. Workspaces are designed to surface the tools and data most relevant to you, reducing complexity and enhancing productivity.

Industrial tools workspace: Created with operational teams in mind, this space gathers our industrial tools in one central location, bringing data insights and collaboration tools to the forefront.

Data management workspace: Optimized for administrators and IT professionals, this workspace streamlines access to integration, contextualization, validation, and system health insights.

Admin workspace: Exclusive to users handling access management, this workspace is where you manage permissions and grant users access to necessary resources within CDF.

How does it work?

You can easily toggle between workspaces with the new sidebar navigation. We'll also guide you through this process when you log in to CDF for the first time after the release. Additionally, you can collapse the sidebar by clicking the two arrows in the top-right corner. This feature is particularly useful when working within our tools.

These updates are part of our ongoing commitment to improving your experience and making Cognite the most user-friendly and efficient platform for all your industrial data needs. We can't wait for you to dive into the new experience, and we're looking forward to hearing your thoughts on these enhancements.

Related products: Product Releases

Streamlit Low-code Applications - June 2024 release

Hello community! As you may have noticed in the post on the upcoming June product release, we are releasing a new beta feature that lets you build and deploy low-code data applications using Streamlit 🚀

Creating and sharing data applications, even simple dashboards, is often a tedious task. Considerations such as hosting infrastructure, approvals with IT teams, and availability of data are frequent blockers for progress. Our new feature allows you to build low-code applications in Python, leveraging the Streamlit framework, and deploy them instantly for users to access. Below you will find a walkthrough of the new experiences for both application builders and consumers. You can also find more information in our documentation. We hope you find this feature as exciting and valuable as we do. As always, we look forward to hearing your feedback 😄

Building & deploying applications - Data Management workspace

To build and deploy Streamlit apps, navigate to the "Data Management" workspace, expand "Explore", and select "Streamlit apps". You will be taken to an overview of existing Streamlit applications, where you can see all applications created within your Cognite Data Fusion project, filter for applications already published for use, and find applications made by yourself.

To create a new application, select "Create app" in the upper right corner of the screen. A form appears asking you to name your application, add a description (optional), place the application in a Data Set (optional), and either build the application from scratch or start from pre-made templates. There is also an option to import Streamlit application files.

Note: Streamlit applications are stored as files in Cognite Data Fusion, so you will need write access to Files to be able to create applications.

Once you have filled out the form, a new screen appears displaying the Streamlit application's Python code next to a preview of the application. You can click "Show / Hide" in the upper right corner to remove both the code editor and the top toolbar, letting you view the application full screen, similar to what the end user will experience.

After creating or editing your application, you can click "Settings" in the bottom left of the screen. Here you can change the information provided earlier, along with a few other options such as light or dark mode themes. Most importantly, you can publish or unpublish your application. Published applications appear in the "Industrial Tools" workspace. More on that in the next section!

Using applications - Industrial Tools workspace

To access and use published Streamlit applications, navigate to the "Industrial Tools" workspace and select "Custom apps". You can then view and search all Streamlit applications published to your Cognite Data Fusion project, given you have the necessary access.

Note: Since Streamlit applications are stored as files, using an app requires Files read access to the Data Set it is stored in. You will also need read access to the Data Resource Types used in the application (e.g. Time Series, Data Model instances, etc.). If no Streamlit applications have been published to your project, you will see an empty screen guiding you to the documentation on how to build and deploy applications.
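The two access notes above can be sketched as a toy check: to use a published app, a user needs Files read access on the Data Set the app file is stored in, plus read access to each Data Resource Type the app touches. The capability tuples below are purely illustrative and not the real CDF ACL format:

```python
# Toy sketch of the access rule described in the notes above.
# A capability is modeled here as a (resource, action, scope) tuple,
# which is an illustrative simplification, not the actual CDF ACL schema.

def can_use_app(capabilities, app_data_set, resource_types):
    """Return True if a user's capabilities satisfy the access rule."""
    if ("files", "read", app_data_set) not in capabilities:
        return False  # cannot read the app file itself
    # Read access is needed for every resource type the app uses,
    # scoped either to the same data set or to everything.
    return all(
        (resource, "read", app_data_set) in capabilities
        or (resource, "read", "all") in capabilities
        for resource in resource_types
    )
```

For example, a user with Files read on the app's data set and Time Series read on everything can use an app that only reads time series, but not one that also reads data model instances.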

Related products: Public Beta, Streamlit

Product Release Spotlight - June 2024 Release

Hi everyone! 👋 The next major Cognite Data Fusion product release is approaching on June 4th. We're excited to announce lots of new upcoming features across our Industrial Tools and Data Operations capabilities. This post walks you through selected highlights from the release, including:

INDUSTRIAL TOOLS
Cognite Search | comprehensive data exploration for industrial users
Industrial Canvas | collaborative troubleshooting and analysis
Charts | no-code time series monitoring
Charts | data quality indicators (beta)
Fusion Landing Page | workspaces
InField | customizable field observations (beta)
InField | mobile data explorer
InRobot | offline robotic mission planning (beta)
Maintain | enhanced activity sequencing (beta)
Maintain | shift support (beta)
Streamlit | low-code applications (beta)

DATA OPERATIONS
Auth & Access Management | simplified user management
3D | contextualization and configuration enhancements
Extractors | Kafka hosted extractor (beta)
Data Workflows | data orchestration API
Time Series | data quality status codes
Time Series | improved data point queries (beta)

These, and much more, can be found in our latest release notes. Check out all the additional features and improvements that will enable your teams to drive even more value from Cognite Data Fusion. We also recommend watching the June Spotlight video.

These new capabilities will be available to you on June 4th, 2024. We would love to hear your feedback here on Cognite Hub over the coming weeks. We also want to thank all our community members for your contributions so far. We're eager to continue this collaborative process with you, seeking your valuable input on both existing and upcoming product features. We encourage you to keep submitting your Product Ideas here in the community, to help us understand how we can continue to evolve Cognite Data Fusion to fit your needs. Let's keep this momentum going!
INDUSTRIAL TOOLS

Cognite Search | comprehensive data exploration for industrial users
Official launch of Cognite Search, a streamlined search and data exploration experience for industrial users. This first major release includes sorting search results by properties, adjusting visible columns in search results, and allowing admins to set default filter combinations for all users. Additionally, the Location (Site) configuration now supports subtree and external ID prefix filtering. These updates simplify data exploration across the portfolio, making it more accessible for industrial users. In the 3D search experience, we've also released tools for measuring multipoints, areas, and volumes, as well as rules that trigger color changes on contextualized objects when specific criteria are met. These improvements offer better spatial awareness through advanced measurements and real-time visual alerts, significantly boosting operational efficiency and decision-making. Read more about it here.

Industrial Canvas | collaborative troubleshooting and analysis
Official launch of Industrial Canvas, a powerful tool designed to streamline collaboration on industrial data. It allows users to integrate assets, time series, 3D models, and more into an infinite canvas, and includes markups, shapes, lines, versioning, canvas locking, commenting, and sharing capabilities, all with enhanced performance and interactivity. By enabling direct collaboration on contextualized OT, IT, and ET data, Industrial Canvas facilitates quicker, higher-quality decisions and reduces the time spent on data collaboration, allowing more focus on production improvements. Read more about it here.

Charts | no-code time series monitoring
Official launch of monitoring in Cognite Charts, replacing a traditionally time-consuming task for automation engineers. Users can now easily create thresholds on time series and set up alerts to be notified when these thresholds are breached. These enhancements reduce the burden of setting up monitoring, allowing anyone to monitor time series data. The alert system ensures proactive investigation when key equipment indicators deviate, improving overall efficiency and responsiveness. Read more about it here.

Charts | data quality indicators (beta)
View time series data quality codes, such as "Bad" and "Uncertain", in Cognite Charts on both raw and aggregated data points. Additionally, the raw data point limit has been increased to 100,000, and gaps in calculations caused by bad or uncertain data are indicated more clearly. These updates enable users to trust the data they see, enhancing confidence in their decision-making and reducing the need to revert to source systems. Read more about it here.

Fusion Landing Page | workspaces
Persona-based workspaces tailored specifically for Industrial Users and Data Managers address the previous challenge of a mixed interface. This update includes a revamped sidebar and home page, making it easier for users to find the tools and information they need right from the start. The new design simplifies onboarding for both Industrial and Data Expert users and lays the groundwork for more personalized landing pages in the future.

InField | customizable field observations (beta)
Observations can now be configured to fit field workers' specific workflows. Users can create Observations directly from their desktops and use improved filtering and search capabilities, making it easier to review media and discover high-criticality findings. By tailoring Observations to field workflows, users can take better actions and quickly address important issues, improving the quality of data from the field, increasing overall efficiency, and reducing response times.

InField | mobile data explorer
Official launch of a mobile-only "Search" landing page that improves access to relevant data in the field, addressing the issue of siloed information across different source systems. Field workers can now configure various InField workflows, such as observations and checklists, enabling or disabling them as needed. This update provides instant access to crucial data, offering a simple yet scalable solution for field workers. It simplifies deployment and can be expanded to accommodate additional workflows over time, enhancing overall efficiency and troubleshooting capabilities.

InRobot | offline robotic mission planning (beta)
Plan and create robotic missions offline using digital twins and contextualized 360 images, addressing the high costs and deployment delays associated with manual operator rounds. This feature enables users to define tasks and camera positions based on visual data, allowing the entire robotic mission to be prepared in advance. Once the robot is onsite, it can execute recurrent missions without any additional configuration, streamlining operations and reducing setup time.

Maintain | enhanced activity sequencing (beta)
Automated sequencing of activities based on defined dependencies and constraints, such as shift duration, day or night work, and overlapping activities, addresses the inefficiencies of manual and Excel-based resource estimation for shutdowns or maintenance campaigns. Users can now gather sets of work orders into multiple sequences and toggle them on and off to observe their impact on the tradematrix. These features improve resource planning, reduce execution delays, and eliminate the cumbersome manual work of creating work order sequences and corresponding tradematrixes.

Maintain | shift support (beta)
Enhanced Gantt functionality displays both day and night shifts within a single day, along with the corresponding tradematrix for each shift. This update addresses the previous limitation in Maintain, which made it difficult to assess plans involving multiple shifts within the same day. By providing a clear view of resourcing needs for both shifts, this feature improves resource planning, reduces execution delays, and eliminates the need for manual Excel work to create work order sequences and corresponding tradematrixes.

Streamlit | low-code applications (beta)
Users can now build low-code applications in Python using the Streamlit framework, simplifying the process of deploying applications. This capability addresses the time-consuming need for IT approval and hosting infrastructure: users can instantly deploy applications and make them accessible to non-coders. By lowering the burden of app development for citizen developers, this accelerates innovation and decreases the time required to share new solutions within an organization.

DATA OPERATIONS

Auth & Access Management | simplified user management
CDF admins can now manage user access by adding users directly to groups within CDF, bypassing often time-consuming approval processes. Additionally, admins can create a "default" access group for new users. This update eliminates delays, minimizes errors, and reduces the burden on internal approval processes, ensuring a smoother and more efficient onboarding process. As a result, new users get a better first-time experience of the product. Read more about it here.

3D | contextualization and configuration enhancements
The new 3D Scene Configurator aligns all 3D data correctly in relation to each other, addressing the challenge of viewing different models representing the same location together in one view. This improvement simplifies workflows and ensures accurate spatial relationships, providing a more immersive and precise representation of real-world environments for decision-makers in Cognite's Industrial Tools or third-party applications developed with Reveal. Read more about it here.

Extractors | Kafka hosted extractor (beta)
Support for Kafka messages as a hosted extractor eliminates the need for custom extractor development and deployment. This new extraction method allows users to connect to a Kafka broker and subscribe to a single topic directly. By supporting this popular event streaming protocol, the update provides instant connectivity, removes the need to download and install extractors, and significantly streamlines the process for clients already using Apache Kafka. Read more about it here.

Data Workflows | data orchestration API
Official launch of the Data Workflows API, which lets users orchestrate workflows consisting of Transformations, Functions, and CDF requests, along with their interdependencies. This update addresses the challenges of fragile pipelines, stale data, and difficulties in monitoring and scaling. By enabling the orchestration and monitoring of these workflows, users can minimize the effort required to manage data processes, significantly enhance the robustness and performance of data pipelines, and gain comprehensive observability of end-to-end pipelines rather than just individual processes. Read more about it here.

Time Series | data quality status codes
Time series data quality is now represented using OPC UA standard status codes. This feature addresses the issue of users having to assume that gaps in time series data are due to bad quality. By clearly indicating the quality status of each data point, users gain increased trust in the data. They can also choose how to treat good, bad, and uncertain data points in their calculations, preventing the automated removal of low-quality data during onboarding and ensuring more reliable data analysis. Read more about it here.

Time Series | improved data point queries (beta)
Enhanced capabilities for retrieving data points and aggregates address the need for convenient time series data point queries. Users can retrieve data points based on a named time zone or specified time offsets in 15-minute increments. Additionally, data point aggregates can be obtained using Gregorian calendar units such as days, weeks, months, quarters, and years. This feature increases developer speed by providing a convenient API for data retrieval and improves the accuracy of queries by reducing the likelihood of code errors. Read more about it here.
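As a rough illustration of the improved data point queries, a request might combine a calendar-unit granularity with a named time zone. The field names and the time series external ID below are assumptions based on the description above, not the exact API schema; consult the API reference for the real request shape:

```python
# Hypothetical request body for the improved data point queries:
# calendar-unit aggregates evaluated in a named time zone.
# Field names are illustrative, not the authoritative API schema.
query = {
    "items": [
        {
            "externalId": "pump_42_discharge_pressure",  # made-up series
            "aggregates": ["average", "max"],
            "granularity": "1quarter",  # Gregorian calendar unit
            "timeZone": "Europe/Oslo",  # named zone, not a fixed offset
            "start": "2023-01-01",
            "end": "2024-01-01",
        }
    ]
}
```

A calendar-aware granularity like `1quarter` with a named time zone means daylight-saving transitions and month lengths are handled by the API, rather than by error-prone offset arithmetic in client code.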

Related products: InField, Product Releases, Maintain, Public Beta, InRobot, Industrial Canvas, Charts, Search and Data Exploration, API and SDKs, Extractors, Data Workflows, 3D, Authentication and Access Management

Data Model UI Updates - April Release

Data Modeling provides you with the flexibility to define your own Industrial Knowledge Graphs based on relevant industry standards, your organization's own data structures and use cases, or a combination of all of these. Large, and often complex, Industrial Knowledge Graphs might be needed to represent the full extent of your industrial data across disciplines. An important aspect of these knowledge graphs is being able to explore and iterate on both their data and their structure. With this release, we are enhancing the Data Modeling user interface in Cognite Data Fusion to better visualize the containers and views of your data model.

Isolate a view or container
You can now isolate a view or container by clicking on an item and choosing from the Quick Filter at the bottom right corner. You can decide between isolating the view or container by itself, or showing all other views and containers related to the current selection. You can then further decide which related views and containers should be visible for the selected view or container. Click "Reset" at the top in the search bar when you want to go back to the layout from when the page first loaded.

Clearer relationships visualization
Visualization of self-referencing relationships is now clearer, and arrows clearly identify the direction of a relationship (a circle indicating the source, an arrowhead indicating the target). There is also a simple way to expand an item to see just the relational properties (edge or direct relations), without seeing all properties.

Identifying all views powered by a container
When viewing containers within a space, the selected container will list all the views that use it.

As part of the release, we have also added or fixed the following:
Data management - displays data from reverse direct relations
General - all spaces, containers, and views are displayed, instead of just the first 1000

Related products: Data Modeling

Streamlining Access Management in CDF with Group Membership

Hello! 😄 We're pleased to announce the introduction of group membership within groups in Cognite Data Fusion (CDF). This enables admins to add users directly to groups within CDF, simplifying the process of granting access and of understanding who the members of a group are, by checking within CDF. It also significantly reduces the burden on the customer's IT team by minimizing (or eliminating) the number of tickets/requests they receive to create groups in their IdP, add users to these groups, and then link the IdP groups to groups in CDF.

After adding the required capabilities to a group in CDF, admins can now choose how to manage the group's membership: either internally within CDF, or via the customer's IdP, as before.

Group membership managed within CDF (new)

For group membership managed internally within CDF, there are two options: admins can grant the group's capabilities to all users in the organization, or grant access to users by explicitly adding them to the group in CDF.

1. Granting access by adding users to a group
CDF admins click "Create" or "Edit" on a group in CDF, add or modify all required capabilities and scopes, select "List of users", and then add users as members of the group.

NOTE
- Currently, only users can be added as members of a group in CDF. Support for creating service accounts within CDF and adding them as members of a group will be available in the June release.
- CDF admins cannot add a user to a group in a CDF project if the user has never logged in to the organization, as user profiles are only created upon successful login to an organization.

2. Granting access to all users in the organization
CDF admins create a group in CDF, add all required capabilities with scopes, and select the "All user accounts" option under members.

CAUTION
- Users can access a CDF project within an organization if they are a member of at least one group in the project. Creating a group with "All user accounts" in a CDF project therefore grants access to the project, with the capabilities in the group, to all users in the organization.
- Groups with "All user accounts" as members are prioritized at the top of the list in the group management UI to prevent any unintended access.

We recommend that groups with "All user accounts" as members contain only basic capabilities, such as view-only (all read capabilities), to ensure a smoother first-time login experience for new users.
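The two membership options can be contrasted with a sketch of the group definitions involved. The field names and capability structure below are illustrative approximations, not the authoritative Cognite IAM API schema:

```python
# Hedged sketches of the two group shapes described above.
# Field names are illustrative, not the exact IAM API schema.

# 1. Membership managed in CDF with an explicit user list.
#    Only users who have logged in at least once can be added.
explicit_group = {
    "name": "maintenance-engineers",
    "capabilities": [
        {"timeSeriesAcl": {"actions": ["READ"], "scope": {"all": {}}}}
    ],
    "members": ["user-id-1", "user-id-2"],  # hypothetical user IDs
}

# 2. A "default" group granting view-only access to every user in
#    the organization, per the recommendation above.
default_group = {
    "name": "all-users-read-only",
    "capabilities": [
        {"filesAcl": {"actions": ["READ"], "scope": {"all": {}}}}
    ],
    "members": "allUserAccounts",  # everyone in the organization
}
```

The caution above applies to the second shape: because membership in any group grants project access, a group like `default_group` should carry only basic read capabilities.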

Related products: Authentication and Access Management

Search, Data Exploration and Industrial Canvas │ New Features

Hi everyone! 👋 Today we released a range of new functionality, improvements, and bug fixes in Industrial data search, Data Explorer, and Industrial Canvas. Thank you to all who have provided feedback and posted suggestions to us here on Hub - many of you will recognize your feedback in what gets released today.

Industrial data search

Our focus since the March release has been on incremental improvements: fine-tuning some interactions, tidying up the user experience, and, in parallel, working on some bigger customization functionality that we will release at the end of Q2. For today, we want to share the following.

We have added sorting of search results to make it easier and quicker to find relevant data. You can now sort data (ascending/descending) throughout the user interface:
- The list of categories can be sorted by name or by the number of items each category contains
- The list of search results can be sorted on all properties of the category

Left: Sorting on Categories. Right: Sorting on Properties.

The list of categories has gotten a face-lift, with a default maximum number of categories shown and a "show more" option when the number of categories is high.

The filters and search queries added during your exploration journey are now preserved during your user session to ease the exploration experience. When you start from the list of search results, you can add filters and/or a search query to narrow the list, dig into the filtered data, and then return to the filtered results without going through the filtering steps again.

A new Statistics functionality has been added, enabling you to swiftly create visualizations of your data. This feature lets you categorize data based on specific properties and visualize aggregated metrics such as counts and averages. Please note, however, that the Statistics functionality is not supported for all data categories - you can use this feature for the categories listed under Other categories in the left-hand sidebar.

The Statistics functionality can be used to create visualizations of your data.

Our AI Copilot has been further integrated into the search functionality, streamlining the process of identifying categories and creating filters through conversational language. When you choose to use the Copilot, the results appear in the same interface as a regular search. Additionally, the Copilot can autogenerate visualizations of statistics, further enriching your data exploration experience. You still trigger the Copilot the same way as before, by choosing "Ask Copilot…".

AI Copilot is now more integrated into the Search experience.

Some smaller bug fixes include increasing the portion of text we show (for example, for descriptions), and showing the label name instead of the label ID (thank you for reporting this!).

Industrial Canvas

Today we have released an improvement to the experience of finding and adding data to a canvas by unifying it with Industrial data search. When clicking "Add data", you'll now recognize the experience from Industrial data search, which gives a more visual way of browsing your data before adding it to the canvas.

The experience of finding and adding data to your Canvas is now unified with the Industrial data search experience.

Data Explorer

There are two main improvements released today, both a result of feedback received from you here on Hub. To ease and speed up data exploration, we now preserve the applied filter(s) and search query while you browse in the details mode (they are reset when you exit the details mode).

When you apply filter(s) and/or a search query in details mode, these are preserved while you are exploring data.

We have also adjusted the filtering logic in the common filters. In the Metadata filter, there are several keys, each typically having several values. If you select several values for the same key, these are combined with the OR operation, while selections across different keys are combined with the AND operation. For example: in the Metadata filter, you select the key "network level" and two values. This selection is treated as: network level = production_subsystem OR production_system. If you select another key in addition, that selection is combined with the AND operation: (network level = production_subsystem OR production_system) AND product_type = Gross. Combining the Metadata filter with other common or resource-specific filters also uses the AND operation.

Here we have applied the filters: (network level = production_subsystem OR production_system) AND product_type = Gross.

Let us know if you have questions or feedback! Best, Sofie Berge, on behalf of Cognite Product Management
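The metadata filtering logic described above (OR between values of the same key, AND between different keys) can be sketched as a small predicate. This is an illustration of the semantics, not the product's internal implementation:

```python
def matches_metadata_filter(item_metadata, selections):
    """Return True if an item's metadata matches the filter selections.

    selections maps each metadata key to the set of accepted values.
    Values for one key combine with OR (membership in the set);
    different keys combine with AND (all must match).
    """
    return all(
        item_metadata.get(key) in accepted_values  # OR within one key
        for key, accepted_values in selections.items()  # AND across keys
    )

# The example from the post: network level is production_subsystem OR
# production_system, AND product_type is Gross.
selections = {
    "network level": {"production_subsystem", "production_system"},
    "product_type": {"Gross"},
}
```

An item tagged `network level = production_system, product_type = Gross` matches this filter, while changing either key's value to something outside its accepted set makes the item drop out.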

Related products: Product Releases, Industrial Canvas, Search and Data Exploration

Charts Updates and New Features

Hi everyone! We're excited to announce the latest updates to Charts, designed to enhance your data visualization and analysis experience. In this release, we've focused on improving data viewing capabilities, introducing new features for better visual analysis, and enhancing monitoring job setup and validation. These enhancements are aimed at providing users with greater flexibility, control, and confidence in leveraging Charts for their data-driven decisions.

a) Improved chart data viewing and trust in the data viewed
- Increased the maximum number of raw data points viewable in a chart from 500 to 100,000.
- Aggregate min/max shading is now enabled by default when switching to aggregated data.
This substantial increase in data points allows for a more comprehensive analysis of data trends in RAW mode without the need to switch to aggregated views prematurely. Min/max shading being on by default ensures that users are better aware of when they are viewing aggregated data. We hope these changes help build better trust in Charts and make the data look more similar to what is viewed in PI systems.

b) Slider/marker functionality in Charts
- Users can set and delete multiple sliders/markers to compare values across various points in a time series.
With this new functionality, users gain the ability to freeze values at specific points in time within a given chart. By setting multiple sliders/markers, users can effectively compare data values across different time series, enabling deeper insights into data trends and patterns.

c) Backfilling of monitoring jobs
- Receive a count of the alerts that would have been triggered within the specified time frame when setting up a monitoring job.
With this latest enhancement, users can now conduct historical runs of monitoring jobs before finalizing their configurations. Upon setting up a monitoring job, users receive valuable insight into the number of alerts that would have been triggered within the specified time frame based on the configured thresholds. This functionality allows users to verify the effectiveness of their threshold values and helps prevent alert spamming by ensuring that thresholds are set at reasonable levels.
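Conceptually, the backfill check amounts to counting how many times the historical series would have breached the threshold. A minimal sketch of that idea (counting upward crossings, one simplified alert policy among several possible ones):

```python
def count_alerts(values, threshold):
    """Count upward threshold crossings in a historical series.

    Each transition from at-or-below the threshold to above it counts
    as one alert, so a sustained excursion triggers only once.
    """
    alerts = 0
    previously_above = False
    for value in values:
        above = value > threshold
        if above and not previously_above:
            alerts += 1  # new breach starts here
        previously_above = above
    return alerts
```

Running such a count over the chosen time frame before saving the job is what tells you whether a threshold is set at a reasonable level or would spam alerts.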

Related products: Charts

Exciting Updates: Enhanced Data Integration in Cognite Data Fusion (CDF)

Greetings, everyone! We're thrilled to share some exciting updates on our ongoing efforts to enhance data integration within Cognite Data Fusion (CDF). These improvements are aimed at streamlining your experience and empowering you with more capabilities. Let's dive right in!

New features for the File Extractor:
1. OpenText Documentum REST services support: We've expanded the capabilities of the Cognite File Extractor to integrate with OpenText Documentum REST services, so that you can connect to multiple sources by configuring only one extractor. Check out the documentation for configuration details.
2. Advanced filtering: Enjoy enhanced client-side filtering options based on metadata, allowing for precise filtration and complex rules tailored to your needs.
3. Improved SharePoint integration: Say hello to improved SharePoint support! We now offer support for multiple SharePoint sites, dynamic site discovery, and the flexibility to configure sites via URL. Plus, we've added the capability to detect deleted files, ensuring your data remains up to date.
4. Enhanced file handling: Say goodbye to size limitations! We now support files larger than 5 GB, facilitating seamless handling of larger files.

Updates and changes:
1. Continuous background uploads: File uploads now occur continuously in the background and in parallel, resulting in significantly faster speeds. Additionally, file traversal on sources has been optimized for a smoother experience.
2. Revised configuration schema: We've refined the configuration schema for FTP and SFTP, aligning it with other file providers for consistency. Refer to our documentation for detailed parameter names.
3. Improved retries and handling: Experience more robust retries and improved handling of interrupt signals, minimizing disruptions during operations.
4. External IDs enhancement: We've revamped the external IDs for local and Samba files, which are now based on file paths instead of the underlying inodes. Note that this means your files will be given a new external ID if you are migrating from version 1.

Deprecation of the old Documentum extractor: With support for D2 added to the Cognite File Extractor, we are discontinuing our dedicated Documentum extractor. We encourage users to transition to the more robust File Extractor. While the Documentum extractor will continue to function as usual, please note that bug fixes and support will no longer be provided.

Introducing a public Beta service for Azure Event Hubs and Kafka: We're excited to introduce a public Beta service to connect with Kafka. Seamlessly access Kafka messages with minimal configuration, and enjoy a fully hosted integration service within CDF.

Thank you for joining us on this journey of innovation and excellence! Stay tuned for more updates and improvements on the horizon. Warm regards, Elka Sierra
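The external-ID change can be illustrated with a toy scheme. A path-based ID is reproducible from the file's location alone, whereas an inode-based ID depends on the underlying filesystem and changes if the file is moved between disks; the exact format the extractor uses is not shown here, so treat this as a hypothetical illustration:

```python
def path_based_external_id(source_prefix, file_path):
    """Toy path-based external ID: stable as long as the path is stable.

    Hypothetical scheme for illustration only; the real File Extractor
    format may differ. Windows separators are normalized so the same
    share yields the same ID regardless of the client platform.
    """
    normalized = file_path.replace("\\", "/")
    return source_prefix + ":" + normalized
```

Because version 1 derived IDs from inodes, the same file yields a different external ID under a path-based scheme, which is why migrated files receive new external IDs.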

Related products: Extractors

Data Modeling UI Updates - March Release

Data Modeling provides you with the flexibility to define your own Industrial Knowledge Graphs based on relevant industry standards, your organization's own data structures and use cases, or a combination of all of these. Large, and often complex, Industrial Knowledge Graphs might be needed to represent the full extent of your industrial data across disciplines. An important aspect of these knowledge graphs is being able to explore and iterate on both their data and their structure. With this release, we are enhancing the Data Modeling user interface in Cognite Data Fusion to better display the underlying concepts that power your industrial knowledge graph. With this increased visibility into your models, you can have greater understanding of, and confidence in, iterating on the structure of your data model.

Space-centric exploration of containers and views

Explore the structures that power your data model by looking at the space they reside in. When in a space, you can choose a container or view to explore its structure and the data within.

Explore containers and view information for the data model

Within the visualizer and the data model editor, you can now explore the definition of each view, and explore the containers that power it. Besides the raw configuration, you can also explore them in a more relational manner via the data model visualizer.

Data management summary + sidebar

In the data management tab, you can explore data more easily with the new sidebar functionality that lets you look through each instance and its relationships (by double-clicking on the property you want to delve deeper into). Additionally, a new profile view enables you to better understand the totality of your data: get an aggregated summary of the data, grouped by whichever property you would like.
As part of the release, we have also added or fixed the following:

- Data model statistics - fixed the progress bar in the summary not showing the correct number of nodes and edges
- Data modeling - unit directive support
- Data model visualization - fixed inverse relationships not being rendered correctly
- Data management - fixed instances not being visible for views that inherit properties from another view
- Data management - filters with dates, timestamps and sorting are working as expected
- Data connection - new Excel connection string option via the "Connection" panel

Let us know if you have any questions or feature requests regarding the release below!

Related products: Product Releases, Data Modeling

Product Release Spotlight - March 2024 Release │ Cognite Data Fusion

Hi everyone! 👋 The next Cognite Data Fusion product release is soon approaching. Together with the return of the sun (for those of us in the icy north), we are excited to announce several new upcoming features across our most popular user interfaces and DataOps capabilities. This post will walk you through selected highlights from the release:

- Increase efficiency of troubleshooting and root cause analysis with updates to Industrial Canvas
- Improve productivity in Jupyter Notebooks with enhanced AI copilot and collaboration
- Accelerate solution development with new Unit Conversion and Time Series enhancements
- Increase robustness of data pipelines with Data Workflows
- Improve performance of your Industrial Knowledge Graphs with the upgraded Data Modeling UI

There's also much more to explore and discover. Dive into the latest release notes and uncover all the additional features and improvements designed to take your industrial data journey to new heights. These brand new additions will be available to you on March 5th, 2024. We would love to hear your feedback here on Cognite Hub over the coming weeks. We also want to thank all our community members for your contributions so far. We're eager to continue this collaborative process with you, seeking your valuable input on both existing and upcoming product features. We encourage you to continue to submit your Product Ideas here in the community, to help us understand how we can continue to evolve Cognite Data Fusion to fit your needs. Let's keep this momentum going!

Here's a short video summarizing the release highlights.

Increase efficiency of troubleshooting and root cause analysis with updates to Industrial Canvas

When troubleshooting complex issues, a lot of time is spent on finding the right data and sharing it across disciplines. Collaborating in a visual, seamless way reduces the time and effort to resolve issues and get your systems back to optimal performance.
In this release we are introducing new features to Industrial Canvas which improve support for troubleshooting and root cause analysis for our industrial users. In Industrial Canvas, we are enhancing sticky notes so that you can build cause map diagrams for your root cause analysis. Another highly requested feature has been a tighter integration and user experience between Industrial Canvas and Charts. You will now be able to open and view your Charts directly in Industrial Canvas, collecting all essential data in one place and removing the need to go back and forth between browser tabs.

Improve productivity in Jupyter Notebooks with enhanced AI copilot and collaboration

Jupyter Notebooks in Cognite Data Fusion allows you to write and run Python scripts without leaving your browser. The notebooks use your current user credentials when you are logged in to Cognite Data Fusion, which removes the need for additional authentication steps and ensures you cannot gain access to restricted data. Since its initial launch late last year, Jupyter Notebooks has become an increasingly popular feature for our more technical users, such as data scientists and engineers, who use it to explore data and prepare for building data pipelines and solutions. In this release we are further improving the Jupyter Notebooks AI copilot. The copilot has received a major visual overhaul making it simpler to use, together with elevated knowledge of Cognite Data Fusion and its Python SDK, increasing its accuracy. You will also find an upgraded user interface showing where notebooks are stored. This allows you to differentiate between notebooks stored inside or outside of Cognite Data Fusion, enabling sharing of individual notebooks and preventing potential data loss.

Increase robustness of data pipelines with Data Workflows

All solutions in Cognite Data Fusion rely on industrial data being efficiently onboarded and contextualized, while maintaining quality and trust in the data.
Data managers, data engineers, and data scientists usually manage several data processes to prepare Industrial Knowledge Graphs and other solutions for their organization's consumers. The introduction of Data Workflows allows you to orchestrate the execution of Cognite Data Fusion's data processing features, such as Transformations and Functions. This enables significantly increased observability and control of end-to-end data pipelines, leading to improved performance and faster scaling.

Accelerate solution development with new Unit Conversion and Time Series enhancements

Cognite Data Fusion's core data resource types, such as Data Modeling and Time Series, are extensively used in the creation of Industrial Knowledge Graphs and solutions tailored for solving industry-specific problems. Enabled by Cognite Data Fusion's DataOps capabilities and SDKs, solution builders often set up calculations and similar processes to prepare their industrial data to fit their use cases. In this release, we are introducing a set of new features for Unit Management and Time Series aiming to expand this toolkit. These new features cover several commonly requested gaps in Cognite Data Fusion, which we believe will simplify and accelerate the experience of exploring and consuming relevant data, and of deploying solutions on top of it. Building upon our latest Unit Management features, which allow you to assign units of measure to Time Series instances and Data Modeling properties, you will now be able to consume data in the unit of measure relevant to you. This allows you to spend less time on error-prone unit conversions, from simple, one-off data exploration to more complex solution building. For Time Series, we are introducing Subscriptions and Data Quality Status Codes. Up until now, our users have had to download the full range of data points to get updates on new or changed data points.
This increases the risk of performance challenges, especially for solutions or processes dependent on large data volumes. The new Subscriptions feature allows you to set up one or more subscriptions across Time Series and obtain only the latest and changed data points, to be used in, for example, dashboards or calculations. Data Quality Status Codes allow you to represent the data quality of Time Series data points according to the OPC-UA standard. This gives you the option to choose how to handle data points not considered good enough, instead of them automatically being removed when ingested into Cognite Data Fusion from source systems. The added insight into the quality of data points will increase trust in the industrial data, both for solution builders and data consumers.

Improve performance of your Industrial Knowledge Graphs with the upgraded Data Modeling UI

Data Modeling provides you with the flexibility to define your own Industrial Knowledge Graphs based on relevant industry standards, your organization's own data structures and use cases, or a combination of all of these. Large, and often complex, Industrial Knowledge Graphs might be needed to represent the full extent of your industrial data across disciplines. This increases the risk of the Industrial Knowledge Graph having reduced performance for its consumers, such as query timeouts and slow response times. With this release, we are enhancing the Data Modeling user interface in Cognite Data Fusion to show how relationships between data are stored and what can be queried. This allows you to better understand how you can optimize your graph to deliver the best possible performance and stability for your consumers. You can find more detailed information on the updates in the release notes on our Documentation portal.
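The Data Quality Status Codes mentioned above follow the OPC-UA convention, in which the two most significant bits of a 32-bit status code encode severity (00 = Good, 01 = Uncertain, 10 = Bad). As a minimal sketch of how a consumer might triage data points by severity (the helper names are our own, not part of the Cognite SDK):

```python
# The two most significant bits of a 32-bit OPC-UA status code encode
# severity: 00 = Good, 01 = Uncertain, 10 = Bad.
GOOD, UNCERTAIN, BAD = 0b00, 0b01, 0b10

def severity(status_code: int) -> int:
    """Extract the severity bits from an OPC-UA status code."""
    return (status_code >> 30) & 0b11

def keep_point(status_code: int, accept_uncertain: bool = False) -> bool:
    """Decide whether to keep a data point, rather than dropping it silently."""
    sev = severity(status_code)
    return sev == GOOD or (accept_uncertain and sev == UNCERTAIN)

print(keep_point(0x00000000))  # True: Good
print(keep_point(0x40000000))  # False: Uncertain, rejected by default
print(keep_point(0x40000000, accept_uncertain=True))  # True
print(keep_point(0x80000000))  # False: Bad
```

The point of status codes is exactly this kind of choice: instead of questionable values being silently removed at ingestion, the consumer decides whether "Uncertain" data is good enough for a given dashboard or calculation.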

Related products: Product Releases

Embark on your data science journey with Jupyter Notebooks in Fusion

Jupyter is a tool loved by data scientists, analysts, and engineers, particularly for its interactive computing environment and robust data processing capabilities. This platform allows users to dynamically write and execute code, visualize results in real time, and efficiently handle large datasets, making it an essential tool for in-depth data analysis and exploration. With Jupyter Notebooks in Fusion, you can now write and run notebooks without leaving your browser. Notebooks can be stored in Cognite Data Fusion (CDF), allowing easy access and sharing. All code is executed in a "sandbox" within your browser, reducing the chance of leaking information. In this sandbox, notebooks do not have access to your local files or information stored on your computer. All communication with CDF and notebook code is executed under the current user credentials, so Jupyter cannot be used to gain access to restricted data stored in CDF. Paired with our new GenAI code copilot, we have made it easier than ever to get productive with CDF. The copilot will help you write and explain code. Soon, the copilot will also help you fix code errors. If you are new to Jupyter, we recommend first reading the official JupyterLab introduction guide before reading the rest of this guide.

Join the Beta: Your Feedback Matters

Jupyter Notebooks in Fusion is in public beta. As with any beta program, some level of instability and unforeseen issues may be expected. We strongly encourage users to actively participate in this beta phase by providing feedback, which is invaluable to us. For product interaction and reporting bugs, please visit Cognite Hub. If you have ideas for improvements you would like to share, please create a new product idea on Cognite Hub. Your contributions are not only welcome but essential in making Jupyter Notebooks in Fusion a more robust and user-friendly tool.
Using Jupyter Notebooks in Fusion

This chapter guides you through the process of using Jupyter Notebooks within the Cognite Fusion environment, detailing how to access, store, and manage your notebooks. You will also learn how to authenticate with CDF to access information stored there, and how the new copilot can help you accelerate the process of writing code. Jupyter Notebooks is available under the Explore menu in Cognite Fusion. Once started, previous Jupyter Notebooks users will recognize the familiar user interface. Notebooks and files are accessible in the file browser.

Storage

The first time you open Jupyter Notebooks, you will notice a few folders already available:

- examples/ - This folder contains some interesting examples of things you can do with Jupyter Notebooks in Fusion, for inspiration.
- quickstart/ - A quickstart guide for getting started with data modeling and population of data.
- CDF ☁/ - A special folder for loading and storing notebooks to Cognite Data Fusion.

There are two different ways of storing notebooks and data in Jupyter Notebooks in Fusion: locally or in Cognite Data Fusion (CDF). By default, all files and notebooks are stored within the browser storage. Since this storage can be cleared (e.g. as a result of clearing the browser cache), it is recommended to only use it for small notebooks and temporary files you can afford to lose. Locally stored notebooks are accessible across projects - that is, you will have access to notebooks from CDF project A when you are working in project B. To store persistent notebooks and share work with your colleagues, use the CDF folder. Inside this folder, you will find each data set you have read-write access to. If you already have a data set you want to store notebooks in - great! If not, see the section below for creating data sets for notebooks. To save a notebook to Cognite Data Fusion, simply place it inside the respective folder in the file explorer in Jupyter.
Note that only notebook files with the .ipynb extension will be saved to CDF - no other files will be synchronized.

Authentication helpers

When using Jupyter Notebooks in Fusion, it's recommended to use the default authentication helper provided. To use this helper, simply add the following code to your notebook:

from cognite.client import CogniteClient

client = CogniteClient()

This authenticates using the current user credentials, meaning that any code run in Jupyter Notebooks in Fusion when using the authentication helper has the same access permissions as the user.

GenAI code assistant

Jupyter Notebooks in Fusion comes with a useful coding assistant, accessible from the cell editor in Jupyter. The assistant will help you write code based on natural language input and explain existing code. Later it will also be able to help you fix code errors. Best practices for succeeding with the code copilot include:

- Be explicit. If you want the copilot to work on data you retrieved in previous cells, state the names of those variables. You might also want to hint at the types of those variables, especially if this is not obvious from the code in prior cells.
- Use Cognite Data Fusion lingo to help the copilot understand which APIs are relevant. For example, if you are retrieving work order data, provide information on how to retrieve this data - not just an instruction to retrieve work orders.
- Instruct the copilot to do one single task per cell. For additional tasks, add additional code cells. Manually combine code from multiple cells after verifying that the code works.
- Write instructions in English. Other languages might work, but expect the best accuracy when using English.
- Always read through the generated code before running it, to make sure the code doesn't perform harmful actions.
This is especially important if you have elevated access rights to CDF, to ensure data integrity. Expect to be required to change parts of the generated code. The copilot will make mistakes, but can help point you in the right direction. Some examples of prompts you might want to try out are:

- Initialize a Cognite client
- Retrieve all root assets
- Search for documents by "Operating procedure" and print the content of the 5 first documents
- Fetch time series values for the last year for the time series with external ID "XYZ". Plot the time series and print the min, max and mean values.

Note that this is an early version of the code assistant. Expect accuracy to improve as we develop our coding assistant over the coming months.

GenAI data insights

We are also introducing a new Python package, cognite-ai, that will help you use GenAI not only as a coding assistant, but also to simplify data insights in your code. This package is based on PandasAI and allows you to "talk to your data frames" from CDF. An example notebook is available under the examples/ folder. Note that this Python package can also be used outside Jupyter Notebooks in Fusion. This package is in an experimental state.

Security considerations

When you are using the default authentication helper, all code executed in Jupyter Notebooks in Fusion is run under the current user credentials, meaning Jupyter Notebooks will not provide access to any data the user doesn't already have access to. Notebooks are stored in CDF and/or browser storage. Locally cached versions of notebooks are kept in browser storage. It's advisable to clear notebook cell outputs if sensitive information is shown there. Never store secrets in notebooks, even private ones.

Limitations and known issues

Known issues:

- Due to restrictions in the underlying technology, you can only have one browser tab/window with Jupyter Notebooks open at the same time.
- The link generated from the "Copy shareable link" file context menu does not work.
- Packages with native libraries are not supported. When you see an error message like "Can't find a pure Python 3 wheel for '<package>'. See: https://pyodide.org/en/stable/usage/faq.html#micropip-can-t-find-a-pure-python-wheel. You can use `micropip.install(..., keep_going=True)` to get a list of all packages with missing wheels.", it typically means that the package requires a native library and cannot be used with Pyodide. See the Pyodide documentation for more information.
- It's not possible to interrupt the execution of notebook cells.

Advanced: preparing data sets for sharing notebooks

In order to store notebooks in Cognite Data Fusion (CDF), you must have access to read and write files in a data set. To share notebooks within your organization, all collaborators must have files:read and files:write ACLs for that data set. Access to managing data sets is usually restricted, and you might need a CDF administrator to help you prepare data sets and manage access.
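As a rough sketch of the kind of access a collaborator group needs, the capability below scopes files read/write to a single data set. Both the exact capability schema and the data set ID (123456) are placeholders that should be checked against the CDF access-management documentation; this is not an authoritative API payload.

```python
# Illustrative only: verify the exact capability schema against the CDF
# access-management documentation. The data set ID is a made-up placeholder.
notebook_dataset_id = 123456

capability = {
    "filesAcl": {
        # Collaborators need both READ and WRITE to load and save notebooks.
        "actions": ["READ", "WRITE"],
        # Scope the grant to only the data set used for shared notebooks.
        "scope": {"datasetScope": {"ids": [notebook_dataset_id]}},
    }
}

print(sorted(capability["filesAcl"]["actions"]))  # ['READ', 'WRITE']
```

A CDF administrator would attach a capability like this to the group that all notebook collaborators belong to.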

Related products: Jupyter Notebooks

Product Release Spotlight - January 2024 Release │ Cognite Data Fusion

New year - new release! Welcome to 2024, the year that brings easy accessibility and seamless utilization of complex industrial data, thanks to the power of Generative AI.

- Easier to get started with the new landing page on Cognite Data Fusion
- Finding data is now vastly improved for industrial users in the new search experience (covering both (F)DM and asset-centric data)
- Engineers doing collaborative troubleshooting and root cause analysis are supercharged by key features requested by our core users in Industrial Canvas
- In direct response to your feedback, we've improved monitoring of equipment and processes with editing, highlighting and more context in alert emails
- And finally, for all industrial users: first-class units support!

And that's not all - there's much more to explore and discover. Dive into the latest release and uncover all the additional features and improvements designed to take your industrial data journey to new heights. These brand new additions will be available to you on January 9th, 2024. We would love to hear your feedback here on Cognite Hub over the coming weeks. We also want to thank all our community members for your contributions so far. We're eager to continue this collaborative process with you, seeking your valuable input on both existing and upcoming product features. We encourage you to continue to submit your Product Ideas here in the community, to help us understand how we can continue to tailor Cognite Data Fusion to your needs. Let's keep this momentum going!

Here's a short video summarizing the exciting updates.

Easier to get started with the new landing page on Cognite Data Fusion

Working in critical roles, we know how important it is to get started fast and get straight to work. This is why we are launching a new, focused landing page for Cognite Data Fusion with quick access to Search, Industrial Canvas and Charts. Your most recent work is highlighted so you can get right back to where you left off.
All of the powerful features of Cognite Data Fusion are available in the menu, with quick access to what you need.

Finding data for industrial users is now vastly improved in the new search experience

With the launch of the brand new industrial data search experience, we are simplifying how you can search, browse and find your industrial data across different types of data. Search and browse across 3D, sensor data, equipment, documents, work orders and more, all in one place. The refreshed experience is optimized for industrial end users looking for quick answers across complex data. This includes built-in Generative AI search, where you can use natural language to get simple answers to complex questions, powered by your industrial knowledge graph.

Supercharge your visual troubleshooting with Industrial Canvas

When troubleshooting complex issues, most time is spent on finding the right data and sharing it across disciplines. Collaborating in a visual, seamless way reduces the time and effort to resolve issues and get your systems back to optimal performance. With this release we are introducing more control when collaborating visually in Industrial Canvas. You can lock down root cause analyses to prevent changes after an issue is resolved. You can also roll back to earlier versions of an analysis through automatic snapshots of your work. Enrich your P&IDs and technical documents with a built-in symbol library, and export all markups as PDF. Here you will also be guided by Generative AI, where all documents can be summarized and interrogated: long, technical documents can give you structured answers to your questions, with references to where in the document each answer was extracted.

Improved monitoring of equipment and processes

When monitoring critical equipment and processes, you now get more relevant context and information in alert emails.
You can jump straight into the relevant chart and start troubleshooting, or unsubscribe directly from the email. You also have the ability to edit and tune the parameters of ongoing monitoring jobs, improving the overall workflow of monitoring your plant.

First-class units support

Across our APIs and our Python and JavaScript SDKs, we now have full support for industrial units through the Cognite Unit Library. All features are described, with examples, in our documentation. You can find more detailed information on the updates in the release notes on our Documentation portal.
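With first-class unit support, conversions like the one below can be requested directly from the API instead of being done by hand (the exact parameter names, e.g. a target unit such as "temperature:deg_c", should be checked in the SDK documentation). For reference, the manual conversion that server-side unit handling makes unnecessary looks like this:

```python
def fahrenheit_to_celsius(value_f: float) -> float:
    """The kind of hand-written unit conversion server-side units replace."""
    return (value_f - 32.0) * 5.0 / 9.0

# Sensor readings ingested in Fahrenheit, consumed in Celsius.
readings_f = [212.0, 98.6, 32.0]
readings_c = [round(fahrenheit_to_celsius(v), 1) for v in readings_f]
print(readings_c)  # [100.0, 37.0, 0.0]
```

Moving this kind of arithmetic out of application code is what makes consumption less error-prone: every consumer asks for the unit it needs, rather than each re-implementing the conversion.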

Related products: Product Releases

Document Parsing - Using generative AI to assist in Data Management

Dear Community,

We would like to provide an update on our ongoing efforts to address the challenges associated with unstructured data. We have developed this tool in close collaboration with Aker BP, while concurrently identifying areas where it could deliver significant business value. Throughout the process we have identified applications such as material master data management, brownfield modification, and management of asset lifecycle information.

Significance of the initiative: The primary benefit is a reduction in the time spent wrestling with tedious data extraction, allowing users to allocate more time to meaningful tasks. Our document data extraction tool not only saves time but also serves as a safeguard against errors often associated with manual input. Furthermore, we are looking to develop tooling and processes to mitigate and eliminate errors generated by the LLM.

Issues addressed:

- Problem: Manual data work resembling an endless endeavor. Key value driver: time savings.
- Problem: Errors and discrepancies resulting from a high volume of manual input. Key value driver: error minimization.
- Problem: Applications hindered by sparse data. Key value driver: establishment of a robust data foundation for applications.

Our focus is on streamlining processes to establish a singular, definitive digital version, eliminating the need to navigate through multiple iterations. This approach adds tangible business value, particularly in terms of time savings and error reduction, benefiting users who rely on accurate data for various applications. The implementation aims to provide enhanced efficiency and reliability in data handling. For more information on the data management journey at Aker BP, feel free to watch the webinar here.

Related products: Contextualization

Deprecation Notice for AIR and Introduction of New Monitoring Capabilities in Cognite Charts

Hi CDF Users,

This is a reminder notice for the deprecation of AIR. The alpha-maturity monitoring solution AIR will be deprecated on 2023/12/31, and we will no longer provide support for AIR after this date. To ensure a seamless transition and to provide you with improved monitoring capabilities, we are excited to introduce our new monitoring solution in Charts, which is in open beta and now available to all customers.

Key features of monitoring in Charts:

- Enhanced user experience: Monitoring in Charts has a better user interface, providing a user-friendly experience for proactive troubleshooting and root cause analysis.
- Integration with no-code calculations: Our new solution allows for effortless integration with no-code calculations, streamlining the monitoring process and empowering users to derive insights without the need for extensive coding.
- Stability and reliability: Monitoring in Charts is built with stability and reliability in mind.
- Scalability: Monitoring in Charts is built on a backend API that is better suited for scaling, making it an overall better solution than AIR as your monitoring needs grow.

We believe that the transition to Charts will bring significant benefits to your monitoring needs. We appreciate your understanding and look forward to continuing our partnership with you as we embark on this exciting new chapter in monitoring capabilities within CDF.

Related products: Charts

Early Adopter Product Release Spotlight - October 2023 Release

Dear Community,

We're thrilled to share the latest enhancements from our October 2023 product release, exclusively available through our Early Adopter Program!

Introducing Data Workflows: Revolutionize your pipelines with our new managed process orchestration service, Data Workflows, integrated within Cognite Data Fusion. With Data Workflows, you gain the power to seamlessly coordinate and execute interdependent processes, including transformations, functions, requests to CDF, and dynamic tasks, all in the right order and on time. This service is currently accessible exclusively to our Early Adopters. We're eager to hear your valuable feedback! Join the group and request participation here.

Explore industrial data with ease: Empowering industrial users regardless of technical expertise, our Industrial Data Search feature simplifies the process of discovering relevant data. You can uncover detailed results for your industrial context, and refine your search down to specific site/asset details. Further, you can explore data through multiple dimensions like 3D, AI search, and a standard list view, equipped with an array of powerful filter options. Dive in and be part of our Early Adopter Program! Join the group and request participation here.

We can't wait to hear about your experiences with these new features! You can find details on the full October 2023 product release here.

Related products: Product Releases

Product Release Spotlight - October 2023 Release │ Cognite Data Fusion

We are excited to announce the October 2023 release of Cognite Data Fusion! This post covers some of the significant highlights of the release:

- Enhanced data exploration
- AI-powered data exploration
- Manual contextualization of 3D models
- Saving no-code calculations
- Monitoring asset health indicators
- Jupyter Notebooks with Cognite AI copilot

These enhancements will be made available on October 31, 2023. We would love to hear your feedback here on Cognite Hub over the coming weeks. You can also find more detailed release notes on our Documentation portal, as well as the October roadmap update on Product Updates. This month, we focus on data onboarding. We also want to thank all our community members for your contributions so far. We're eager to continue this collaborative process with you, seeking your valuable input on both existing and upcoming product features. Let's keep this momentum going!

Here's a short video summarizing the exciting updates.

Enhanced data exploration

We are happy to share our newest enhancement to the search experience within Cognite Data Fusion, furthering our ambitions to deliver easy access to complex industrial data. This feature provides user-friendly search and enhanced context for industrial users, such as engineers, technical experts, and operational teams. You can narrow your search to a specific site - like an individual offshore asset or a processing facility - to make the search results more relevant. The search results are grouped and presented in an industrial context and can be viewed through a list view and a 3D view, with a wide set of filter capabilities across all given data types. You can easily change between layered models to see CAD models, 360 images, and point cloud models.

AI-powered data exploration

As an advancement over traditional search and filtering of data models, users can now search using AI. You can toggle on AI and find data by using the example questions or writing your own using natural language.
You can adjust the generated filters at any time, and also view the aggregates as charts and text summaries.

Manual contextualization of 3D models

3D models such as CAD or laser scans don't always contain the required information to link the model to assets, and current workflows require a fully contextualized 3D model for you to really benefit from the power of 3D. If you're adding 3D models to Cognite Data Fusion, you can now do manual contextualization of both CAD models and point cloud models in Cognite Data Fusion. The asset links can be reflected in all Cognite applications using the Reveal 3D viewer, enabling users to maximize value capture from applications such as InField and Maintain, or any bespoke customer applications built with Cognite Reveal.

Saving no-code calculations

If you're using CDF to create and run calculations, you can now create, save, and schedule the calculations in Charts. This eliminates the need to create Python scripts and deploy functions. The scheduled calculations can be consumed in external dashboards or used as input to advanced models.

Monitoring asset health indicators

Monitoring asset health indicators and performance KPIs is one of the key jobs of engineers and subject matter experts when evaluating asset health. We're now adding the ability to set up threshold-based monitoring in Charts, using raw time series or saved no-code calculations. You can also enable email notifications to be sent when thresholds are reached for the monitoring jobs.

Jupyter Notebooks with AI copilot

Data scientists and data engineers can now work with and run Python scripts from Cognite Data Fusion without setting up infrastructure. This is a valuable tool when combining the power of Cognite Data Fusion with external tooling, such as data analytics packages, and when you need to enrich your data using custom logic. Notebooks run in a sandboxed environment inside the browser under the current user credentials, ensuring data integrity.
We're also introducing a code copilot integrated with Jupyter Notebooks. Leveraging generative AI, the copilot helps you write and understand code. Code generated from natural language gets you to value faster by reducing the need to write boilerplate code or do advanced data analytics yourself. We now also support natural-language data insights in notebooks, using large language models. This allows you to ask questions about data retrieved from Cognite Data Fusion or to generate plots. You can find the product documentation here.

Early adopter programs

Data workflows is a new managed process orchestration service within Cognite Data Fusion. Using CDF Workflows, you can orchestrate the order and timely execution of interdependent processes such as transformations, functions, requests to CDF, and dynamic tasks. The service is only available by joining the Early Adopter Program; request participation through the group on Cognite Hub.

Industrial data search enables industrial users without a technical background to easily find relevant data. Search results show the industrial context, with the ability to narrow the search to a specific site or asset, and with several lenses for searching and viewing data, such as 3D, AI search, and standard search with a list view and a range of filter capabilities. Please join us in the Early Adopter Program; request participation through the group on Cognite Hub.

Related products: Product Releases

Cognite Data Fusion October Roadmap Update: Data Onboarding

As fall moves closer to winter, it is again time to provide a short update on the roadmap of Cognite Data Fusion and share some of the exciting work going on in different parts of the product. To avoid overloading you with information, we will split this update into several sections, allowing us to provide more content in bite-sized chunks that are easier to digest - and to produce. We'll be publishing a set of updates in the weeks to come, so please stay tuned for more news on other parts of the product. As always, we'd love to hear your feedback, questions, and ideas right here on Cognite Hub.

Data onboarding deals with the job of extracting data from source systems, loading it into Cognite Data Fusion, and processing the data to build the industrial knowledge graph. This also includes contextualization: the technologies that enable Cognite Data Fusion to find connections and relationships between data even when the connecting data elements are not obvious, and thus build a knowledge graph that is richer than the data it is created from.

A rich connector marketplace, and ease of setting up connectors

The catalogue of systems that Cognite Data Fusion supports with packaged extractors is ever growing - our goal is to hit 100+ by the end of the year. The list of upcoming systems is too long to describe here, but you can look forward to support for new systems such as AspenTech's IP.21 enterprise historian, support for a wider range of SAP versions and protocols, and significant improvements to existing capabilities for systems like relational databases and Documentum. One fun and surprisingly useful piece of connectivity coming up shortly is the ability to connect directly to Excel and Google Sheets spreadsheets and use them as tabular sources of data.
In addition to building a richer catalogue of connectors, we are also working on making it easier to host and run extraction jobs directly in Cognite Data Fusion whenever the upstream system is reachable from the cloud. Hosted extractors provide a simple "click and configure" experience, making it easy to set up new extraction pipelines directly from the Cognite Data Fusion UI, while also providing scalable performance and making the extractors easy to monitor and operate over time.

Data Workflow Orchestration - streamline and turbocharge your data pipelines

Towards the end of the year, we will ship the first release of a new way of transforming and working with data being onboarded to Cognite Data Fusion. We are moving beyond the current capabilities of the transformation pipeline and introducing the concept of a data workflow: a set of data onboarding tasks (these can be Spark transformations, Cognite Functions, or many other task types) that are performed on the data, either in parallel or in sequence. Tasks can be triggered by schedules, by external events, or by previous tasks, enabling data workflows to move beyond purely schedule-based execution: a workflow can be triggered by an update and proceed to execute its tasks efficiently and without delay. This will dramatically increase the capabilities, predictability, and performance of the data pipelines in Cognite Data Fusion. Not satisfied with improvements under the hood, we also aim to ship a rich user experience for working with data workflows as part of the product. Look forward to the first iterations of this towards the end of the year, continuing into 2024.

Conceptual designs for the data workflow editor

Contextualizing engineering diagrams

We are targeting major improvements to Cognite Data Fusion's ability to extract context and meaning from engineering diagrams, more specifically Piping and Instrumentation Diagrams (commonly referred to as P&IDs).
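The dependency-driven execution described above can be illustrated with a minimal, hypothetical sketch in Python. This is not Cognite's actual Workflows API; it only shows the core idea that each task runs once all of its upstream tasks have completed, using the standard library's topological sorter.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def run_workflow(tasks, dependencies):
    """Run `tasks` ({name: callable}) in an order that respects
    `dependencies` ({name: set of upstream task names}).

    Returns the execution order. A real orchestrator would run
    independent tasks in parallel; here they run sequentially.
    """
    order = []
    for name in TopologicalSorter(dependencies).static_order():
        tasks[name]()
        order.append(name)
    return order

# Hypothetical onboarding workflow: extract, then two independent
# transformations, then a contextualization step that needs both.
log = []
tasks = {name: (lambda n=name: log.append(n))
         for name in ("extract", "transform_a", "transform_b", "contextualize")}
deps = {
    "extract": set(),
    "transform_a": {"extract"},
    "transform_b": {"extract"},
    "contextualize": {"transform_a", "transform_b"},
}
order = run_workflow(tasks, deps)
print(order[0], order[-1])  # → extract contextualize
```

Event-based triggering then amounts to starting such a run when an update arrives, rather than on a fixed schedule.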
These diagrams are core to understanding the system context of an industrial asset. We already support extracting tags from P&IDs, allowing them to be linked to assets (and thus to time series data, 3D models, and more), but we are now looking to take the next step and extract more knowledge from the symbols in the diagram. Our goal is to build a tool that lets you identify symbols, connections, and flows of fluids (or gas) in a system. You will be able to train this tool on the particular standards and symbolic conventions used in your company, and it will allow for human-assisted contextualization of P&IDs where you teach the system to gradually do more and more of the work. This will unlock new levels of richness in the industrial knowledge graph and enable new use cases with the unique data that will only be available in Cognite Data Fusion. Look forward to the first updates and a richer contextualization tool for P&IDs early in 2024, with subsequent improvements coming later in the year.

Example engineering diagram showing rich data contextualization

Onboarding data from documents using Generative AI

Lastly, no roadmap update is complete without mentioning at least one initiative fueled by the rapid evolution of generative AI. This time around, we'd like to highlight some work we are doing on leveraging generative AI to extract data from documents and onboard that data into Cognite Data Fusion. This application has many use cases; a simple one is the ability to create a digital representation of PDFs containing equipment data sheets, where the specified values (for things like maximum and minimum operating temperatures, pressures, and nominal power consumption) can then be used as actual data (complete with lineage, units of measure, etc.) inside Cognite Data Fusion in monitoring jobs, analysis, troubleshooting, and data science.
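As a rough illustration of what "turning a data sheet into data" means, here is a small, self-contained sketch. It is hypothetical and deliberately simplistic: a regular expression stands in for the generative-AI extraction step, and the field names, units, and `DataSheetValue` record are invented for illustration. The point is the shape of the output: typed values with units and lineage, not free text.

```python
import re
from dataclasses import dataclass

@dataclass
class DataSheetValue:
    name: str
    value: float
    unit: str
    source: str  # lineage: the document the value came from

# A toy excerpt, standing in for text extracted from a PDF data sheet.
SHEET_TEXT = """
Max operating temperature: 120 degC
Min operating temperature: -20 degC
Nominal power consumption: 7.5 kW
"""

PATTERN = re.compile(r"^(?P<name>[\w ]+):\s*(?P<value>-?\d+(?:\.\d+)?)\s*(?P<unit>\w+)$")

def parse_sheet(text, source="datasheet.pdf"):
    """Turn 'name: value unit' lines into structured records."""
    records = []
    for line in text.strip().splitlines():
        m = PATTERN.match(line.strip())
        if m:
            records.append(
                DataSheetValue(m["name"].strip(), float(m["value"]), m["unit"], source)
            )
    return records

records = parse_sheet(SHEET_TEXT)
print(len(records), records[2].value, records[2].unit)  # → 3 7.5 kW
```

Once in this form, a value like the maximum operating temperature can feed directly into a monitoring job or an analysis, with its unit and source document attached.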
In other words, onboarding actual data from a PDF file will feel just as simple as onboarding data from a table in a database. Look forward to seeing the first product releases with support for leveraging generative AI to onboard data from documents in early 2024.

If you want to follow our development and shaping of these and many other features of Cognite Data Fusion, use Cognite Hub to engage with us and stay up to date with the latest developments. Cognite Roadmaps are forward-looking and subject to change.

Related products: Product Roadmap