Time Series New Features - Time Zone and Monthly Aggregates

Dear Cognite Data Fusion Users,

We are delighted to announce the general availability of time zone support and monthly aggregates in the Time Series API. This feature allows API users to retrieve aggregated time series data using:

- A named time zone, using time zone names from the IANA time zone database
- A specific time offset in 15-minute increments
- Monthly granularity, in addition to the existing second, minute, hour, and day granularities

Time Zones

When a named time zone is used to retrieve data, the API applies the appropriate offset to the aggregated values of the data points in order to return the correct selection of data. The feature works for all named time zones, including those with a 15-minute offset such as Nepal's.

Aggregations by Month

When retrieving data with the new month granularity, users can specify an aggregate bucket size of any whole number of months. The API automatically returns aggregated values for the month(s) contained within the query, and automatically accounts for the number of days in each month, including leap years and daylight saving time adjustments.

The feature is available on both Time Series and Synthetic Time Series queries, and support is implemented in both the Python SDK and the Cognite Data Source for Grafana.

For more details on how to use the new features, please consult our Developer Guide and API specification.

Related products: API and SDKs

Gain Deeper Data Insights with New Inspection Endpoints for Views and Containers

We're excited to announce the release of powerful new API features that empower you to gain deeper insights into your data and simplify container management. With these new tools, you can:

- See how different parts of your schema connect
- Find out exactly what data is in your instances
- Get a clear picture of your data structure
- Make informed decisions when deleting data, knowing the full impact

Easier Container Management (DMS API only):

- Identify Linked Views: Effortlessly locate all views associated with a particular container.
- Comprehensive View Count: Get an accurate total of all views connected to a container, including those potentially outside your access.

Inspecting Instances (DMS API only):

- Pinpoint Data Containers: Quickly identify which containers store data for a specific instance.
- Mapped View Clarity: Easily see the views that map the containers housing your instance data.

These features empower you to:

- Strengthen Data Understanding: Gain deeper knowledge of your data structure and relationships for improved decision-making.
- Boost Efficiency: Save time and effort with intuitive inspection tools that provide a clear view of your data.

Get started exploring your data like never before! We encourage you to review the updated documentation for detailed instructions on leveraging these new functionalities.

Related products: Data Modeling

Charts New Features - June 2024 Release

Hei Everyone!

We're excited to announce the latest updates to Charts, designed to enhance your data visualization and analysis experience. In this release, we've focused on improving data viewing capabilities, introducing new features for performance monitoring, and maturing the monitoring and alerting capabilities in Charts. These enhancements give users greater flexibility, control, and confidence in leveraging Charts for their data-driven decisions.

1. Time Series Status Codes in Charts

We've enhanced the Charts time series viewer to display not only good data points but also bad and uncertain ones. Uncertain data points are now shown with grey shading, while bad data points are shown as gaps. This improvement allows for more accurate data representation and enhances trust in our product.

Before: If you look at this chart carefully, you will notice lines drawn across gaps, giving the impression that the data is interpolated. This led to an inaccurate representation of the source-system data in Charts.

After: The time series viewer now shows uncertain data points with grey shading, while bad data is represented as a gap. This gives a much more accurate representation of the data as seen in the source systems, leading to more trust in Charts and more reliable RCA and troubleshooting.

Note: To enable these indicators, please contact the relevant CDF individual to help you get this data from your source systems into CDF by updating the extractors.

2. Monitoring and Persisted Calculations in GA

Both monitoring and persisted calculations have matured from beta to production. These capabilities are crucial for the troubleshooting/RCA process, and we're excited about the positive impact on our customer base. Learn more: Monitoring Documentation, Persisted Calculations Documentation.

3. Live Mode in Charts

We're delighted to introduce Live Mode in Charts! You'll now see a heartbeat icon at the top of the chart bar. Toggling this setting enables auto-refresh, so you can see new data points arriving in CDF automatically, without having to pan or click.

Bug Fixes:

- Fixed persisted calculations preview not working: the preview functionality for scheduled calculations now works correctly.
- Fixed an issue causing charts to crash when zooming out: charts no longer crash due to a stack-trace issue when zooming out.
- Plus, we've resolved around 35 backend and frontend bugs to improve performance and user experience.

Thank you for your continued support and feedback. We hope you enjoy these new features and improvements. As always, your feedback is invaluable to us.

Happy Charting!
The Charts Team
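For readers curious how good, uncertain, and bad points can be told apart: the status codes CDF adopted follow the OPC UA scheme, where the top two bits of the 32-bit status code encode the severity. Below is an illustrative sketch of that decoding and of a rendering rule like the one described above; the `render` helper is hypothetical, not the Charts implementation:

```python
def severity(status_code: int) -> str:
    """Top two bits of an OPC UA status code: Good (00), Uncertain (01), Bad (10)."""
    return {0b00: "good", 0b01: "uncertain", 0b10: "bad"}.get(
        (status_code >> 30) & 0b11, "invalid"
    )


def render(point):
    """Hypothetical viewer rule: plot good values normally, shade uncertain
    values grey, and turn bad values into gaps (None)."""
    value, code = point
    sev = severity(code)
    if sev == "bad":
        return None  # rendered as a gap, not interpolated
    return (value, "grey" if sev == "uncertain" else "normal")


print(severity(0x00000000))           # good
print(severity(0x40000000))           # uncertain
print(severity(0x80000000))           # bad
print(render((21.4, 0x80000000)))     # None  -> shown as a gap
```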

Related products: Charts

New navigation update - June 2024 release

We're excited to announce some significant updates to our platform's navigation and the introduction of Workspaces!

Enhanced Navigation

We've evolved our navigation from the traditional top bar to a more intuitive and accessible sidebar. This move is designed to make it easier for you to find what you need, when you need it, with fewer clicks and a cleaner look.

New sidebar navigation

Introducing Workspaces

Recognizing the diverse needs of our IT users and industrial end users, we're rolling out Workspaces. Workspaces are designed to surface the tools and data most relevant to you, reducing complexity and enhancing productivity.

- Industrial tools workspace: Created with operational teams in mind, this space gathers our industrial tools in one central location, bringing data insights and collaboration tools to the forefront.
- Data management workspace: Optimized for administrators and IT professionals, this workspace streamlines access to integration, contextualization, validation, and system health insights.
- Admin workspace: Exclusive to users handling access management, this workspace is where you manage permissions and grant users access to necessary resources within CDF.

How does it work?

You can easily toggle between workspaces with the new sidebar navigation. We'll also guide you through this process when you log in to CDF for the first time after the release. Additionally, you can collapse the sidebar by clicking the two arrows in the top-right corner. This is particularly useful when working within our tools.

These updates are part of our ongoing commitment to improving your experience and making Cognite the most user-friendly and efficient platform for all your industrial data needs. We can't wait for you to dive into the new experience, and we look forward to hearing your thoughts on these enhancements.

Related products: Product Releases

Streamlit Low-code Applications - June 2024 release

Hello community!

As you may have noticed in the post on the upcoming June product release, we are releasing a new beta feature enabling you to build and deploy low-code data applications using Streamlit 🚀

Creating and sharing data applications, even simple dashboards, is often a tedious task. Considerations such as hosting infrastructure, approvals with IT teams, and availability of data are frequent blockers to progress. Our new feature allows you to build low-code applications in Python, leveraging the Streamlit framework, and deploy them instantly so users can access them right away.

Below you will find a walkthrough of the new experiences for both application builders and consumers. You can also find more information in our documentation. We hope you find this feature as exciting and valuable as we do. As always, we look forward to hearing your feedback 😄

Building & deploying applications - Data Management workspace

To build and deploy Streamlit apps, navigate to the "Data Management" workspace, expand "Explore", and select "Streamlit apps". You will be taken to an overview of existing Streamlit applications. Here you can see all applications created within your Cognite Data Fusion project, filter to the applications already published for use, and see the applications you made yourself.

To create a new application, select "Create app" in the upper right corner of the screen. A form appears asking you to name your application, add a description (optional), place the application in a data set (optional), and either build the application from scratch or start from pre-made templates. There is also an option to import Streamlit application files.

Note: Streamlit applications are stored as files in Cognite Data Fusion, so you will need write access to Files to create applications.
Once you have filled out the form, a new screen appears displaying the Streamlit application's Python code alongside a preview of the application. You can click "Show / Hide" in the upper right corner to remove both the code editor and the top toolbar, letting you view the application full screen, similar to what the end user will experience.

After creating or editing your application, click "Settings" in the bottom left of the screen. Here you can change the information provided earlier, along with a few other options such as light or dark mode themes. Most importantly, you can publish or unpublish your application. Published applications appear in the "Industrial Tools" workspace. More on that in the next section!

Using applications - Industrial Tools workspace

To access and use published Streamlit applications, navigate to the "Industrial Tools" workspace and select "Custom apps". You can then view and search all Streamlit applications published to your Cognite Data Fusion project, provided you have the necessary access.

Note: Since Streamlit applications are stored as files, using an app requires Files read access to the data set it is stored in. You will also need read access to the data resource types used in the application (e.g., Time Series, Data Model instances).

If no Streamlit applications have been published to your project, you will see an empty screen guiding you to the documentation on how to build and deploy applications.

Related products: Public Beta, Streamlit

Product Release Spotlight - June 2024 Release

Hi everyone! 👋

The next major Cognite Data Fusion product release is approaching on June 4th. We're excited to announce lots of new upcoming features across our Industrial Tools and Data Operations capabilities. This post walks you through selected highlights from the release, including:

INDUSTRIAL TOOLS
- Cognite Search | comprehensive data exploration for industrial users
- Industrial Canvas | collaborative troubleshooting and analysis
- Charts | no-code time series monitoring
- Charts | data quality indicators (beta)
- Fusion Landing Page | workspaces
- InField | customizable field observations (beta)
- InField | mobile data explorer
- InRobot | offline robotic mission planning (beta)
- Maintain | enhanced activity sequencing (beta)
- Maintain | shift support (beta)
- Streamlit | low-code applications (beta)

DATA OPERATIONS
- Auth & Access Management | simplified user management
- 3D | contextualization and configuration enhancements
- Extractors | Kafka hosted extractor (beta)
- Data Workflows | data orchestration API
- Time Series | data quality status codes
- Time Series | improved data point queries (beta)

These, and much more, can be found in our latest release notes. Check out all the additional features and improvements that will enable your teams to drive even more value from Cognite Data Fusion. We also recommend watching the June Spotlight video.

These new capabilities will be available to you on June 4th, 2024. We would love to hear your feedback here on Cognite Hub over the coming weeks. We also want to thank all our community members for your contributions so far. We're eager to continue this collaborative process with you, seeking your valuable input on both existing and upcoming product features. We encourage you to continue to submit your Product Ideas here in the community, to help us understand how we can continue to evolve Cognite Data Fusion to fit your needs. Let's keep this momentum going!
INDUSTRIAL TOOLS

Cognite Search | comprehensive data exploration for industrial users
Official launch of Cognite Search, a streamlined search and data exploration experience for industrial users. This first major release includes sorting search results by properties, adjusting visible columns in search results, and allowing admins to set default filter combinations for all users. Additionally, the Location (Site) configuration now supports subtree and external ID prefix filtering. These updates simplify data exploration across the portfolio, making it more accessible for industrial users.
In the 3D search experience, we've also released tools for measuring multipoints, areas, and volumes, as well as rules that trigger color changes on contextualized objects when specific criteria are met. These improvements offer better spatial awareness through advanced measurements and real-time visual alerts, significantly boosting operational efficiency and decision-making. Read more about it here.

Industrial Canvas | collaborative troubleshooting and analysis
Official launch of Industrial Canvas, a powerful tool designed to streamline collaboration on industrial data. It allows users to integrate assets, time series, 3D models, and more into an infinite canvas, and includes markups, shapes, lines, versioning, canvas locking, commenting, and sharing capabilities, all with enhanced performance and interactivity. By enabling direct collaboration on contextual OT, IT, and ET data, Industrial Canvas facilitates quicker, higher-quality decisions and reduces the time spent on data collaboration, allowing more focus on production improvements. Read more about it here.

Charts | no-code time series monitoring
Official launch of monitoring in Cognite Charts, replacing what was traditionally a time-consuming task for automation engineers. Users can now easily create thresholds on time series and set up alerts to be notified when these thresholds are breached. These enhancements reduce the burden of setting up monitoring, allowing anyone to monitor time series data. The alert system ensures proactive investigation when key equipment indicators deviate, improving overall efficiency and responsiveness. Read more about it here.

Charts | data quality indicators (beta)
View time series data quality codes in Cognite Charts, such as "Bad" and "Uncertain", on both raw and aggregated data points. Additionally, the raw data point limit has been increased to 100,000, and there is improved indication of gaps in calculations when dealing with bad or uncertain data. These updates enable users to trust the data they see, enhancing confidence in their decision-making processes and reducing the need to revert to source systems. Read more about it here.

Fusion Landing Page | workspaces
Persona-based workspaces tailored specifically for industrial users and data managers address the previous challenge of a mixed interface. This update includes a revamped sidebar and home page, making it easier for users to find the tools and information they need right from the start. The new design simplifies onboarding for both industrial and data expert users and lays the groundwork for more personalized landing pages in the future.

InField | customizable field observations (beta)
Configurable observations fit field workers' specific workflows. Users can create observations directly from their desktops and use improved filtering and search capabilities, making it easier to review media and discover high-criticality findings. By customizing observations to field workflows, users can ensure better actions and quickly address important issues, improving the quality of data from the field, increasing overall efficiency, and reducing response times.

InField | mobile data explorer
Official launch of a mobile-only "Search" landing page that improves access to relevant data in the field, addressing the issue of siloed information across different source systems. Field workers can now configure various InField workflows, such as observations and checklists, enabling or disabling them as needed. This update provides instant access to crucial data, offering a simple yet scalable solution for field workers. It simplifies deployment and can be expanded to accommodate additional workflows over time, enhancing overall efficiency and troubleshooting capabilities.

InRobot | offline robotic mission planning (beta)
Plan and create robotic missions offline using digital twins and contextualized 360 images, addressing the high costs and deployment delays associated with manual operator rounds. This feature enables users to define tasks and camera positions based on visual data, allowing the entire robotic mission to be prepared in advance. Once the robot is onsite, it can execute recurrent missions without any additional configuration, streamlining operations and reducing setup time.

Maintain | enhanced activity sequencing (beta)
Automated sequencing of activities based on defined dependencies and constraints, such as shift duration, day or night work, and overlapping activities, addresses the inefficiencies of manual and Excel-based resource estimation for shutdowns and maintenance campaigns. Users can now gather sets of work orders into multiple sequences and toggle them on and off to observe their impact on the tradematrix. These features improve resource planning, reduce execution delays, and eliminate the cumbersome manual work of creating work order sequences and corresponding tradematrixes.

Maintain | shift support (beta)
Enhanced Gantt functionality displays both day and night shifts within a single day, along with the corresponding tradematrix for each shift. This update addresses the previous limitation in Maintain, which made it difficult to assess plans involving multiple shifts within the same day. By providing a clear view of resourcing needs for both shifts, this feature improves resource planning, reduces execution delays, and eliminates the need for manual Excel work to create work order sequences and corresponding tradematrixes.

Streamlit | low-code applications (beta)
Users can now build low-code applications in Python using the Streamlit framework, simplifying the process of deploying applications. This capability addresses the time-consuming need for IT approval and hosting infrastructure. Users can now instantly deploy applications, making them accessible to non-coders. By lowering the burden of app development for citizen developers, this accelerates innovation and decreases the time required to share new solutions within an organization.

DATA OPERATIONS

Auth & Access Management | simplified user management
CDF admins can now manage user access by adding users directly to groups within CDF, bypassing often time-consuming approval processes. Additionally, admins can create a "default" access group for new users. This update eliminates delays, minimizes errors, and reduces the burden on internal approval processes, ensuring a smoother and more efficient onboarding process. As a result, new users have a better first-time experience with the product. Read more about it here.

3D | contextualization and configuration enhancements
The new 3D Scene Configurator aligns all 3D data correctly in relation to each other, addressing the challenge of viewing different models representing the same location together in one view. This improvement simplifies workflows and ensures accurate spatial relationships, providing a more immersive and precise representation of real-world environments for decision-makers in Cognite's Industrial Tools or third-party applications developed with Reveal. Read more about it here.

Extractors | Kafka hosted extractor (beta)
Support for Kafka messages as a hosted extractor eliminates the need for custom extractor development and deployment. This new extraction method allows users to connect to a Kafka broker and subscribe to a single topic directly. By supporting this popular event streaming protocol, the update provides instant connectivity, removes the need to download and install extractors, and significantly streamlines the process for clients already using Apache Kafka. Read more about it here.

Data Workflows | data orchestration API
Official launch of the Data Workflows API, which allows users to orchestrate workflows consisting of Transformations, Functions, and CDF requests, along with their interdependencies. This update addresses the challenges of fragile pipelines, stale data, and difficulties in monitoring and scaling. By enabling the orchestration and monitoring of these workflows, users can minimize the effort required to manage data processes, significantly enhance the robustness and performance of data pipelines, and gain comprehensive observability of end-to-end pipelines rather than just individual processes. Read more about it here.

Time Series | data quality status codes
Time series data quality is now represented using OPC UA standard status codes. This feature addresses the issue of users having to assume that gaps in time series data are due to bad quality. By clearly indicating the quality status of each data point, users gain increased trust in the data. They can also choose how to treat good, bad, and uncertain data points in their calculations, preventing the automated removal of low-quality data during onboarding and ensuring more reliable data analysis. Read more about it here.

Time Series | improved data point queries (beta)
Enhanced capabilities for retrieving data points and aggregates address the need for convenient time series data point queries. Users can retrieve data points based on a named time zone or specified time offsets in 15-minute increments. Additionally, data point aggregates can be obtained using Gregorian calendar units such as days, weeks, months, quarters, and years. This feature increases developer speed by providing a convenient API for data retrieval and improves the accuracy of queries by reducing the likelihood of code errors. Read more about it here.

Related products: InField, Product Releases, Maintain, Public Beta, InRobot, Industrial Canvas, Charts, Search and Data Exploration, API and SDKs, Extractors, Data Workflows, 3D, Authentication and Access Management

Data Model UI Updates - April Release

Data Modeling provides you with the flexibility to define your own industrial knowledge graphs based on relevant industry standards, your organization's own data structures and use cases, or a combination of these. Large, and often complex, industrial knowledge graphs might be needed to represent the full extent of your industrial data across disciplines. An important aspect of working with these knowledge graphs is being able to explore and iterate on both their data and their structure. With this release, we are enhancing the Data Modeling user interface in Cognite Data Fusion to better visualize the containers and views of your data model.

Isolate a view or container

You can now isolate a view or container by clicking on an item and choosing from the Quick Filter at the bottom right corner. You can decide between isolating the view or container by itself, or showing all other views and containers related to the current selection. You can then further decide which related views and containers should be visible for the selected view or container. Click "Reset" at the top in the search bar to return to the original layout shown when the page first loaded.

Clearer relationships visualization

Visualization of self-referencing relationships is now clearer, and arrows clearly identify the direction of a relationship (a circle indicating the source, an arrowhead indicating the target). There is also a simple way to expand an item to see just its relational properties (edge or direct relations), without seeing all properties.

Identifying all views powered by a container

When viewing containers within a space, the selected container will list all the views that use it.

As part of the release, we have also added or fixed the following:

- Data management: displays data from reverse direct relations
- General: all spaces, containers, and views are displayed, instead of just the first 1000

Related products: Data Modeling

Streamlining Access Management in CDF with Group Membership

Hello! 😄

We're pleased to announce the introduction of group membership within groups in Cognite Data Fusion (CDF). This enables admins to add users directly to groups within CDF, simplifying the process of granting access and making it easy to see who the members of a group are by checking within CDF. It also significantly reduces the burden on the customer's IT team by minimizing (or eliminating) the number of tickets and requests they receive to create groups in their IdP, add users to those groups, and then link the IdP groups to groups in CDF.

After adding the required capabilities to a group in CDF, admins can now choose how to manage the group's membership: either internally within CDF, or via the customer's IdP, as before.

Group membership managed within CDF (new)

For group membership managed internally within CDF, there are two options: admins can grant the group's capabilities to all users in the organization, or grant access to users by explicitly adding them to the group in CDF.

1. Granting access by adding users to a group

- CDF admins click "Create" or "Edit" on a group in CDF
- Add or modify all required capabilities and scopes
- Select "List of users" and then add users as members of the group

NOTE
- Currently, only users can be added as members of a group in CDF. Support for creating service accounts within CDF and adding them as group members will be available in the June release.
- CDF admins cannot add a user to a group in a CDF project if the user has never logged in to the organization, as user profiles are only created upon successful login to an organization.

2. Granting access to all users in the organization

- CDF admins create a group in CDF
- Add all required capabilities with scopes
- Select the "All user accounts" option under members

CAUTION
Users can access a CDF project within an organization if they are a member of at least one group in the project. Creating a group with "All user accounts" in a CDF project therefore grants access to the project, with the capabilities listed in the group, to all users in the organization. Groups with "All user accounts" as members are prioritized at the top of the list in the group management UI to help prevent unintended access. We recommend that groups with "All user accounts" as members contain only basic capabilities, such as view-only (all read capabilities), to ensure a smoother first-time login experience for new users.
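The access rule above can be sketched as a small resolver. This is an illustrative model only, with hypothetical group shapes, not the CDF API:

```python
def has_project_access(user: str, groups: list[dict]) -> bool:
    """A user can access the project if they belong to at least one group.

    A group grants membership either to an explicit list of users, or to
    everyone in the organization via the "All user accounts" option.
    """
    for group in groups:
        if group.get("all_user_accounts"):
            return True  # every user in the organization is a member
        if user in group.get("members", []):
            return True  # explicitly added member
    return False


groups = [
    {"name": "viewers", "all_user_accounts": True},  # basic read-only group
    {"name": "admins", "members": ["alice"]},
]
print(has_project_access("bob", groups))  # True, via "All user accounts"
print(has_project_access("bob", []))      # False, member of no group
```

The sketch also shows why an "All user accounts" group should carry only basic capabilities: it makes every user in the organization a project member.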

Related products: Authentication and Access Management

Search, Data Exploration and Industrial Canvas | New Features

Hi everyone! 👋

Today we released a range of new functionality, improvements, and bug fixes in Industrial data search, Data Explorer, and Industrial Canvas. Thank you to all who have provided feedback and posted suggestions here on Hub; many of you will recognize your feedback in what gets released today.

Industrial data search

Our focus since the March release has been on incremental improvements: fine-tuning some interactions, tidying up the user experience, and, in parallel, working on some bigger customization functionality that we will release at the end of Q2. For today, we want to share the following.

We have added sorting of search results to make it easier and quicker to find relevant data. You can now sort data (ascending/descending) throughout the user interface:

- The list of categories can be sorted by name or by the count it contains
- The list of search results can be sorted on all properties of the category

Left: sorting on categories. Right: sorting on properties.

The list of categories has gotten a face-lift, with a default maximum number of categories shown and a "show more" option when the number of categories is high.

The filters and search queries added as part of your exploration journey are now preserved during your user session to ease the exploration experience. When you start from the list of search results, you can add filters and/or a search query to narrow the list, dig into the filtered data, and return to the filtered results without going through the filtering steps again.

A new Statistics functionality has been added, enabling you to swiftly create visualizations of your data. This feature lets you categorize data based on specific properties and visualize aggregated metrics such as counts and averages. Please note, however, that the Statistics functionality is not supported for all data categories; you can use this feature for the categories listed under "Other categories" in the left-hand sidebar.

The Statistics functionality can be used to create visualizations of your data.

Our AI Copilot has been further integrated into the search functionality, streamlining the process of identifying categories and creating filters through conversational language. When you choose to use the Copilot, the results now appear in the same interface as a regular search. Additionally, the Copilot can autogenerate visualizations of statistics, further enriching your data exploration experience. You still trigger the Copilot the same way as before, by choosing "Ask Copilot…".

AI Copilot is now more integrated into the Search experience.

Smaller bug fixes include increasing the portion of text we show (for example, for descriptions) and showing the label name instead of the label ID (thank you for reporting this!).

Industrial Canvas

Today we released an improved experience for finding and adding data to a canvas, unifying it with the Industrial data search. When clicking "Add data", you'll now recognize the experience from Industrial data search, which gives a more visual way of browsing your data before adding it to the canvas.

The experience of finding and adding data to your canvas is unified with the Industrial data search experience.

Data Explorer

There are two main improvements released today, both a result of feedback received from you here on Hub.

To ease and speed up data exploration, we now preserve the applied filter(s) and search query while you browse in details mode (they are reset when you exit details mode). When you apply filter(s) and/or a search query in details mode, these are preserved while you explore the data.

We have also adjusted the filtering logic in the common filters. In the Metadata filter, you'll find several keys, each typically with several values. If you select several values for the same key, they are combined with the OR operation, while selections across different keys are combined with the AND operation. Examples:

- In the Metadata filter, you select the key "network level" and two values. This selection is treated as: network level = production_subsystem OR production_system.
- If you select another key in addition, the selections are combined with AND: (network level = production_subsystem OR production_system) AND product_type = Gross.
- Combining the Metadata filter with other common or resource-specific filters also uses the AND operation.

Here we have applied the filters: (network level = production_subsystem OR production_system) AND product_type = Gross.

Let us know if you have questions or feedback!

Best,
Sofie Berge, on behalf of Cognite Product Management
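The combined OR-within-a-key, AND-across-keys logic can be sketched as a small predicate. This is hypothetical illustration code, not the actual filter implementation:

```python
def matches(metadata: dict, selections: dict) -> bool:
    """AND across keys; OR across the selected values within each key."""
    return all(
        metadata.get(key) in values          # OR: any selected value matches
        for key, values in selections.items()  # AND: every selected key must match
    )


selections = {
    "network level": ["production_subsystem", "production_system"],
    "product_type": ["Gross"],
}

row = {"network level": "production_system", "product_type": "Gross"}
print(matches(row, selections))  # True: OR satisfied for both keys

row = {"network level": "production_system", "product_type": "Net"}
print(matches(row, selections))  # False: the product_type key fails
```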

Related products: Product Releases, Industrial Canvas, Search and Data Exploration

Charts Updates and New Features

Hei Everyone! We're excited to announce the latest updates to Charts, designed to enhance your data visualization and analysis experience. In this release, we've focused on improving data viewing capabilities, introducing new features for better visual analysis, and enhancing monitoring job setup and validation. These enhancements aim to provide users with greater flexibility, control, and confidence in leveraging Charts for their data-driven decisions.

a) Improved chart data viewing and trust in the data viewed
- Increased the maximum number of raw data points viewable in a chart from 500 to 100,000.
- Aggregate min/max shading is now enabled by default when switching to aggregate data.

This substantial increase in data points allows for a more comprehensive analysis of data trends in RAW mode without the need to switch to aggregated views prematurely. Min/max shading being on by default should ensure that users are better aware of when they are viewing aggregated data. We hope these changes will help build better trust in Charts and make the data look more similar to what is viewed in PI systems.

b) Slider/marker functionality in Charts
- Users can set and delete multiple sliders/markers to compare values across various points in a time series.

With this new functionality, users gain the ability to freeze values at specific points in time within a given chart. By setting multiple sliders/markers, users can effectively compare data values across different time series, enabling deeper insights into data trends and patterns.

c) Backfilling of monitoring jobs
- Receive a count of the alerts that would have been triggered within the specified time frame when setting up a monitoring job.

With this latest enhancement, users can now conduct historical runs of monitoring jobs before finalizing their configurations. Upon setting up a monitoring job, users receive valuable insight into the number of alerts that would have been triggered within the specified time frame based on the configured thresholds. This functionality not only allows users to verify the effectiveness of their threshold values, but also helps prevent potential alert spam by ensuring that thresholds are set at reasonable levels.
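The backfill idea can be pictured with a small sketch (illustrative only - the real feature evaluates your configured monitoring thresholds server-side): counting how many times a historical signal crossed above a threshold gives the number of alerts that would have fired.

```python
def backfill_alert_count(values: list, threshold: float) -> int:
    """Count the alerts an 'above threshold' monitoring job would have
    triggered historically: one alert per upward threshold crossing."""
    alerts = 0
    above = False
    for value in values:
        if value > threshold and not above:
            alerts += 1  # a new excursion above the threshold starts
        above = value > threshold
    return alerts

# A threshold of 80 on this history would have fired twice,
# suggesting the threshold is set at a reasonable level.
readings = [72, 75, 83, 85, 79, 76, 88, 81, 70]
print(backfill_alert_count(readings, 80.0))  # 2
```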

Related products: Charts

Exciting Updates: Enhanced Data Integration in Cognite Data Fusion (CDF)

Greetings, everyone! We're thrilled to share some exciting updates regarding our ongoing efforts to enhance data integration within CDF (Cognite Data Fusion). These improvements are aimed at streamlining your experience and empowering you with more capabilities. Let's dive right in!

New Features for the File Extractor:

1. OpenText Documentum REST Services Support: We've expanded the capabilities of the Cognite File Extractor to seamlessly integrate with OpenText Documentum REST services, so that you can connect to multiple sources by configuring only one extractor. Check out the documentation for configuration details here.
2. Advanced Filtering: Enjoy enhanced client-side filtering options based on metadata, allowing for precise filtration and complex rules tailored to your needs.
3. Improved SharePoint Integration: Say hello to improved SharePoint support! We now offer support for multiple SharePoint sites, dynamic site discovery, and the flexibility to configure sites via URL. Plus, we've added the capability to detect deleted files, ensuring your data remains up to date.
4. Enhanced File Handling: Say goodbye to size limitations! We now support files larger than 5 GB, facilitating seamless handling of larger files.

Updates and Changes:

1. Continuous Background Uploads: File uploads now occur continuously in the background and in parallel, resulting in significantly faster speeds. Additionally, file traversal on sources has been optimized for a smoother experience.
2. Revised Configuration Schema: We've refined the configuration schema for FTP and SFTP, aligning it with other file providers for consistency. Refer to our documentation for detailed parameter names.
3. Improved Retries and Handling: Experience more robust retries and improved handling of interrupt signals, minimizing disruptions during operations.
4. External IDs Enhancement: We've revamped the external IDs for local and Samba files, which are now based on file paths instead of the underlying inodes. Note that if you are migrating from version 1, your files will therefore be given new external IDs.

Deprecation of the Old Documentum Extractor: With the addition of D2 support in the Cognite File Extractor, we are discontinuing our Documentum extractor. We encourage users to transition to the more robust File Extractor. While the Documentum extractor will continue to function as usual, please note that bug fixes and support will no longer be provided.

Introducing a Public Beta Service for Azure Event Hubs and Kafka: We're excited to introduce a public beta service to connect with Kafka. Seamlessly access Kafka messages with minimal configuration, and enjoy a fully hosted integration service within CDF.

Thank you for joining us on this journey of innovation and excellence! Stay tuned for more updates and improvements on the horizon.

Warm regards, Elka Sierra
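To illustrate why path-based external IDs are more robust than inode-based ones (a sketch only - the extractor's exact ID scheme may differ), an ID derived from the path relative to the configured root stays stable across re-mounts and copies of a share, whereas an inode number does not:

```python
from pathlib import PurePosixPath

def external_id_from_path(root: str, path: str) -> str:
    """Hypothetical helper: derive a stable external ID from a file's
    path relative to the configured root directory. Unlike an inode
    number, the relative path survives re-mounts and server moves."""
    return str(PurePosixPath(path).relative_to(root))

print(external_id_from_path("/mnt/share", "/mnt/share/reports/2024/p-101.pdf"))
# reports/2024/p-101.pdf
```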

Related products: Extractors

Data Modeling UI Updates - March Release

Data Modeling provides you with the flexibility to define your own Industrial Knowledge Graphs based on relevant industry standards, your organization’s own data structures and use cases, or a combination of all of these. Large, and often complex, Industrial Knowledge Graphs might be needed to represent the full extent of your industrial data across disciplines. An important aspect of these knowledge graphs is being able to explore and iterate on both their data and their structure.

With this release, we are enhancing the Data Modeling user interface in Cognite Data Fusion to better display the underlying concepts that power your industrial knowledge graph. With this increased visibility into your models, you can have greater understanding of, and confidence in, iterating on the structure of your data model. The release covers:

- Space-centric exploration of containers and views
- Exploring container and view information for the data model
- Data management summary + sidebar

Space-centric exploration of containers and views

Explore the structures that power your data model by looking at the space they reside in. When in a space, you can choose a container or view to explore its structure, and the data within.

Explore container and view information for the data model

Within the visualizer and the data model editor, you can now explore the definition of a view, and explore the containers that power it. Besides the raw configuration, you can also explore them in a more relational manner via the data model visualizer.

Data management summary + sidebar

In the data management tab, you can explore data more easily with the new sidebar functionality that lets you look through each instance and its relationships (by double-clicking on the property you want to delve deeper into). Additionally, a new profile view enables you to better understand the totality of your data: get an aggregated summary of the data, grouped by whichever property you would like.
As part of the release, we have also added or fixed the following:

- Data model statistics - fixed the progress bar in the summary not showing the correct number of nodes and edges
- Data modeling - unit directive support
- Data model visualization - fixed inverse relationships not being rendered correctly
- Data management - fixed instances not being visible for views that inherited properties from another view
- Data management - filters with dates, timestamps and sorting are now working as expected
- Data connection - new Excel connection string option via the “Connection” panel

Let us know if you have any questions or feature requests regarding the release below!
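The profile view’s grouped summary is conceptually a group-by count over one property. A tiny sketch (illustrative only, not the Data Modeling API; the instance data is made up):

```python
from collections import Counter

def profile_by(instances: list, prop: str) -> Counter:
    """Summarize instances by one property, like the profile view's
    aggregated summary grouped by a property of your choice."""
    return Counter(instance.get(prop) for instance in instances)

pumps = [
    {"site": "Alpha", "status": "running"},
    {"site": "Bravo", "status": "stopped"},
    {"site": "Alpha", "status": "running"},
]
print(profile_by(pumps, "site"))  # Counter({'Alpha': 2, 'Bravo': 1})
```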

Related products: Product Releases, Data Modeling

Product Release Spotlight - March 2024 Release │ Cognite Data Fusion

Hi everyone! 👋 The next Cognite Data Fusion product release is fast approaching. Together with the return of the sun (for those of us in the icy north), we are excited to announce several new upcoming features across our most popular user interfaces and DataOps capabilities. This post will walk you through selected highlights from the release:

- Increase the efficiency of troubleshooting and root cause analysis with updates to Industrial Canvas
- Improve productivity in Jupyter Notebooks with an enhanced AI copilot and collaboration
- Accelerate solution development with new Unit Conversion and Time Series enhancements
- Increase the robustness of data pipelines with Data Workflows
- Improve the performance of your Industrial Knowledge Graphs with the upgraded Data Modeling UI

There’s also much more to explore and discover. Dive into the latest release notes and uncover all the additional features and improvements designed to take your industrial data journey to new heights. These brand new additions will be available to you on March 5th 2024. We would love to hear your feedback here on Cognite Hub over the coming weeks.

We also want to thank all our community members for your contributions so far. We're eager to continue this collaborative process with you, seeking your valuable input on both existing and upcoming product features. We encourage you to continue to submit your Product Ideas here in the community, to help us understand how we can continue to evolve Cognite Data Fusion to fit your needs. Let's keep this momentum going!

Here’s a short video summarizing the release highlights.

Increase efficiency of troubleshooting and root cause analysis with updates to Industrial Canvas

When troubleshooting complex issues, a lot of time is spent on finding the right data and sharing it across disciplines. Collaborating in a visual, seamless way reduces the time and effort to resolve issues and get your systems back to optimal performance.
In this release we are introducing new features to Industrial Canvas which improve support for troubleshooting and root cause analysis for our industrial users. We are enhancing sticky notes so that you can build cause map diagrams for your root cause analysis. Another highly requested feature has been a tighter integration and user experience between Industrial Canvas and Charts. You will now be able to open and view your Charts directly in Industrial Canvas, collecting all essential data in one place and removing the need to go back and forth between browser tabs.

Improve productivity in Jupyter Notebooks with enhanced AI copilot and collaboration

Jupyter Notebooks in Cognite Data Fusion allows you to write and run Python scripts without leaving your browser. The notebooks use your current user credentials when you are logged in to Cognite Data Fusion, which removes the need for additional authentication steps and the risk of gaining access to restricted data. Since its initial launch late last year, Jupyter Notebooks has become an increasingly popular feature for our more technical users, such as data scientists and engineers, being used to explore data and prepare for building data pipelines and solutions.

In this release we are further improving Jupyter Notebooks’ AI copilot. The copilot has received a major visual overhaul making it simpler to use, together with elevated knowledge of Cognite Data Fusion and its Python SDK, increasing its accuracy. You will also find an upgraded user interface showing where notebooks are stored. This allows you to differentiate between notebooks stored inside or outside of Cognite Data Fusion, enabling sharing of individual notebooks and preventing potential data loss.

Increase robustness of data pipelines with Data Workflows

All data in Cognite Data Fusion relies on industrial data being efficiently onboarded and contextualized, while maintaining quality and trust in the data.
Data managers, data engineers, and data scientists usually manage several data processes to prepare Industrial Knowledge Graphs and other solutions for their organization’s consumers. The introduction of Data Workflows allows you to orchestrate the execution of Cognite Data Fusion’s data processing features, such as Transformations and Functions. This enables significantly increased observability and control of the end-to-end data pipelines, leading to improved performance and faster scaling.

Accelerate solution development with new Unit Conversion and Time Series enhancements

Cognite Data Fusion’s core data resource types, such as Data Modeling and Time Series, are extensively used in the creation of Industrial Knowledge Graphs and solutions tailored for solving industry-specific problems. Enabled by Cognite Data Fusion’s DataOps capabilities and SDKs, solution builders often set up calculations and similar processes to prepare their industrial data to fit their use cases. In this release, we are introducing a set of new features for Unit Management and Time Series aiming to expand this toolkit. These new features cover several commonly requested gaps in Cognite Data Fusion, which we believe will simplify and accelerate the experience of exploring and consuming relevant data, and deploying solutions on top of it.

Building upon our latest Unit Management features, which allow you to assign units of measure to Time Series instances and Data Modeling properties, you will now be able to consume data in the unit of measure relevant to you. This allows you to spend less time on error-prone unit conversions, from simple one-off data exploration to more complex solution building.

For Time Series, we are introducing Subscriptions and Data Quality Status Codes. Up until now, our users have had to download the full range of data points to get updates on new or changed data points.
This increases the risk of performance challenges, especially for solutions or processes dependent on large data volumes. The new Subscriptions feature allows you to set up one or more subscriptions across Time Series, and obtain only the latest and changed data instances to be used in e.g. dashboards or calculations.

Data Quality Status Codes allow you to represent the data quality of Time Series data points according to the OPC-UA standard. This provides you with the option to choose how to handle data instances not considered good enough, instead of them automatically being removed when ingested into Cognite Data Fusion from source systems. The added insight into the quality of data instances will increase trust in the industrial data, both for solution builders and data consumers.

Improve performance of your Industrial Knowledge Graphs with upgraded Data Modeling UI

Data Modeling provides you with the flexibility to define your own Industrial Knowledge Graphs based on relevant industry standards, your organization’s own data structures and use cases, or a combination of all of these. Large, and often complex, Industrial Knowledge Graphs might be needed to represent the full extent of your industrial data across disciplines. This increases the risk of the Industrial Knowledge Graph having reduced performance for its consumers, such as query timeouts and slow response times.

With this release, we are enhancing the Data Modeling user interface in Cognite Data Fusion to show how relationships between data are stored, and what can be queried. This allows you to better understand how you can optimize your graph to deliver the best possible performance and stability for your consumers.

You can find more detailed information on the updates in the release notes on our Documentation portal.
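As background on the status codes themselves: OPC-UA encodes the overall quality in the two most significant bits of a 32-bit status code (00 = Good, 01 = Uncertain, 10 = Bad), so a consumer deciding how to treat each data point can branch on that severity. A minimal sketch (illustrative only, not the SDK’s API):

```python
def opcua_severity(status_code: int) -> str:
    """Classify an OPC-UA status code by its two most significant
    bits: 00 -> Good, 01 -> Uncertain, 10 -> Bad (11 is reserved)."""
    top_bits = (status_code >> 30) & 0b11
    return {0b00: "Good", 0b01: "Uncertain", 0b10: "Bad"}.get(top_bits, "Reserved")

for code in (0x00000000, 0x40000000, 0x80000000):
    print(hex(code), opcua_severity(code))
```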

Related products: Product Releases

Embark on your data science journey with Jupyter Notebooks in Fusion

Jupyter is a tool loved by data scientists, analysts, and engineers, particularly for its interactive computing environment and robust data processing capabilities. This platform allows users to dynamically write and execute code, visualize results in real time, and efficiently handle large datasets, making it an essential tool for in-depth data analysis and exploration.

With Jupyter Notebooks in Fusion, you can now write and run notebooks without leaving your browser. Notebooks can be stored in Cognite Data Fusion (CDF), allowing easy access and sharing. All code is executed in a “sandbox” within your browser, reducing the chance of leaking information. In this sandbox, notebooks do not have access to your local files or information stored on your computer. All communication with CDF and notebook code is executed under the current user credentials, so Jupyter cannot be used to gain access to restricted data stored in CDF. Paired with our new GenAI code copilot, we have made it easier than ever to get productive with CDF. The copilot will help you write and explain code. Soon, the copilot will also help you fix code errors.

If you are new to Jupyter, we recommend first reading the official JupyterLab introduction guide before reading the rest of this guide.

Join the Beta: Your Feedback Matters

Jupyter Notebooks in Fusion is in public beta. As with any beta program, some level of instability and unforeseen issues may be expected. We strongly encourage users to actively participate in this beta phase by providing feedback, which is invaluable to us. For product interaction and reporting bugs, please visit Cognite Hub. If you have ideas for improvements you would like to share, please create a new product idea on Cognite Hub.
Your contributions are not only welcome but essential in making JupyterLab in Fusion a more robust and user-friendly tool.

Using Jupyter Notebooks in Fusion

This chapter guides you through the process of using Jupyter Notebooks within the Cognite Fusion environment, detailing how to access, store, and manage your notebooks. You will also learn how to authenticate with CDF to access information stored there, and how the new copilot can help you accelerate the process of writing code. Jupyter Notebooks is available under the Explore menu in Cognite Fusion. Once started, previous Jupyter Notebooks users will recognize the familiar user interface. Notebooks and files are accessible in the file browser.

Storage

The first time you open Jupyter Notebooks, you will notice a few folders already available:

- examples/ - This folder contains some interesting examples of things you can do with JupyterLab in Fusion, as inspiration.
- quickstart/ - A quickstart guide for getting started with data modeling and population of data.
- CDF ☁/ - A special folder for loading and storing notebooks to Cognite Data Fusion.

There are two different ways of storing notebooks and data in Jupyter Notebooks in Fusion: locally or in Cognite Data Fusion (CDF). By default, all files and notebooks are stored within the browser storage. Since this storage can be cleared (e.g. as a result of clearing the browser cache), it is recommended to only use this storage for small notebooks and temporary files you can afford to lose. Locally stored notebooks are accessible across projects - that is, you will have access to notebooks from CDF project A when you are working in project B.

To store persistent notebooks and share work with your colleagues, use the CDF folder. Inside this folder, you will find each data set you have read-write access to. If you already have a data set you want to store notebooks in - great! If not, see the section below on creating data sets for notebooks.
To save a notebook to Cognite Data Fusion, simply place it inside the respective folder in the file explorer in Jupyter. Note that only notebook files with the .ipynb extension will be saved to CDF - no other files will be synchronized.

Authentication helpers

When using Jupyter Notebooks in Fusion, it’s recommended to use the default authentication helpers provided. To use this helper, simply add the following code to your notebook:

from cognite.client import CogniteClient
client = CogniteClient()

This authenticates using the current user credentials, meaning that any code run in Jupyter Notebooks in Fusion with the authentication helper has the same access permissions as the user.

GenAI code assistant

Jupyter Notebooks in Fusion comes with a useful coding assistant. This assistant will help you write code based on natural language input and explain existing code. Later it will also be able to help you fix code errors. The code copilot is accessible from the cell editor in Jupyter.

Best practices for succeeding with the code copilot include:

- Be explicit. If you want the copilot to work on data you retrieved in previous cells, state the names of those variables. You might also want to hint at the types of those variables, especially if they are not obvious from the code in prior cells.
- Use Cognite Data Fusion lingo to help the copilot understand which APIs are relevant. For example, if you are retrieving work order data, provide information on how to retrieve this data - not just an instruction to retrieve work orders.
- Instruct the copilot to do one single task per cell. For additional tasks, add additional code cells. Manually combine code from multiple cells after verifying that the code works.
- Write instructions in English. Other languages might work, but expect the best accuracy when using English.
- Always read through the generated code before running it, to make sure the code doesn’t perform harmful actions. This is especially important if you have elevated access rights to CDF, to ensure data integrity.
- Expect to be required to change parts of the generated code. The copilot will make mistakes, but can help point you in the right direction.

Some examples of prompts you might want to try out are:

- Initialize a Cognite client
- Retrieve all root assets
- Search for documents by "Operating procedure" and print the content of the first 5 documents
- Fetch time series values for the last year for the time series with external ID "XYZ". Plot the time series and print the min, max and mean values.

Note that this is an early version of the code assistant. Expect accuracy to improve as we develop our coding assistant over the coming months.

GenAI data insights

We also introduce a new Python package, cognite-ai, that will help you use GenAI not only as a coding assistant, but also to simplify data insights in your code. This package is based on PandasAI and allows you to “talk to your data frames” from CDF. An example notebook is available under the examples/ folder. Note that this Python package can also be used outside Jupyter Notebooks in Fusion. This package is in an experimental state.

Security considerations

When you are using the default authentication helper, all code executed in Jupyter Notebooks in Fusion is run under the current user credentials, meaning Jupyter Notebooks will not provide access to any data the user doesn’t already have access to. Notebooks are stored in CDF and/or browser storage. Locally cached versions of notebooks are kept in browser storage. It’s advisable to clear notebook cell outputs if sensitive information is listed there. Never store secrets in notebooks, even private notebooks.
Limitations and known issues

Known issues:

- Due to restrictions in the underlying technology, you can only have one browser tab/window with Jupyter Notebooks open at the same time.
- The link generated from the “Copy sharable link” file context menu does not work.
- Packages with native libraries are not supported. An error message like "Can't find a pure Python 3 wheel for '<package>'" typically means that the package requires a native library and cannot be used with Pyodide. You can use micropip.install(..., keep_going=True) to get a list of all packages with missing wheels; see the Pyodide documentation (https://pyodide.org/en/stable/usage/faq.html#micropip-can-t-find-a-pure-python-wheel) for more information.
- It’s not possible to interrupt the execution of notebook cells.

Advanced: preparing data sets for sharing notebooks

In order to store notebooks in Cognite Data Fusion (CDF), you must have access to read and write files in a data set. To share notebooks within your organization, all collaborators must have the files:read and files:write ACLs. Access to managing data sets is usually restricted, and you might need a CDF administrator to help you prepare data set(s) and manage access.
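As a sketch of what a CDF administrator would set up (the data set ID 123 is a placeholder, and your group-management tooling may differ), a group capability granting the required files access scoped to a notebook data set looks roughly like this:

```python
# Sketch of a CDF group capability granting files read/write access
# scoped to a single data set (the ID 123 is a placeholder).
notebook_capability = {
    "filesAcl": {
        "actions": ["READ", "WRITE"],
        "scope": {"datasetScope": {"ids": [123]}},
    }
}
print(sorted(notebook_capability["filesAcl"]["actions"]))  # ['READ', 'WRITE']
```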

Related products: Jupyter Notebooks

Product Release Spotlight - January 2024 Release │ Cognite Data Fusion

New year - new release! Welcome to 2024, the year that brings easy accessibility and seamless utilization of complex industrial data, thanks to the power of Generative AI.

- Easier to get started with the new landing page on Cognite Data Fusion
- Finding data for industrial users is now vastly improved in the new search experience (including both (F)DM and Asset Centric)
- Engineers doing collaborative troubleshooting and root cause analysis are supercharged by key features requested by our core users in Industrial Canvas
- In direct response to your feedback, we’ve improved monitoring of equipment and processes with editing, highlighting and more context in alert emails
- And finally, for all industrial users, first-class Units support!

And that’s not all: there’s much more to explore and discover. Dive into the latest release and uncover all the additional features and improvements designed to take your industrial data journey to new heights. These brand new additions will be available to you on January 9th 2024. We would love to hear your feedback here on Cognite Hub over the coming weeks.

We also want to thank all our community members for your contributions so far. We're eager to continue this collaborative process with you, seeking your valuable input on both existing and upcoming product features. We encourage you to continue to submit your Product Ideas here on the community, to help us understand how we can continue to tailor Cognite Data Fusion to your needs. Let's keep this momentum going!

Here’s a short video summarizing the exciting updates.

Easier to get started with the new landing page on Cognite Data Fusion

Working in critical roles, we know how important it is to get started fast and get straight to work. This is why we are launching a new, focused landing page for Cognite Data Fusion with quick access to Search, Industrial Canvas and Charts. Your most recent work is highlighted so you can get right back to where you left off.
All of the powerful features of Cognite Data Fusion are available in the menu, with quick access to what you need.

Finding data for industrial users is now vastly improved in the new Search experience

With the launch of the brand new Industrial data search experience, we are simplifying how you can search, browse and find your industrial data across different types of data. Search and browse across 3D, sensor data, equipment, documents, work orders and more, all in one place. The refreshed experience is optimized for industrial end users looking for quick answers across complex data. This includes built-in Generative AI search, where you can use natural language to get simple answers to complex questions, powered by your industrial knowledge graph.

Supercharge your visual troubleshooting with Industrial Canvas

When troubleshooting complex issues, most time is spent on finding the right data and sharing it across disciplines. Collaborating in a visual, seamless way reduces the time and effort to resolve issues and get your systems back to optimal performance. With this release we are introducing more control when collaborating visually in Industrial Canvas. You can lock down root cause analyses to prevent changes after an issue is resolved. You can also roll back to earlier versions of an analysis through automatic snapshots of your work. Enrich your P&IDs and technical documents with a built-in symbol library, and export all markups as PDF.

Here you will also be guided by Generative AI, where all documents can be summarized and interrogated. Long, technical documents can give you structured answers to your questions, with references to where in the document each answer was extracted.

Improved monitoring of equipment and processes

When monitoring critical equipment and processes, you now get more relevant context and information in alert emails.
You can jump straight into the relevant chart and start troubleshooting, or unsubscribe directly from the email. You also have the ability to edit and tune the parameters of ongoing monitoring jobs, improving the overall workflow of monitoring your plant.

First-class Units support

Across our APIs and Python and JavaScript SDKs, we now have full support for industrial units through our Cognite Unit Library. All features and examples are described in our documentation and examples. You can find more detailed information on the updates in the release notes on our Documentation portal.
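To give a feel for what first-class unit support removes from application code, here is the kind of hand-written conversion that unit-aware data access makes unnecessary (illustrative only - with a shared unit library you ask for the target unit instead of converting yourself):

```python
# Manual conversions like these are what unit-aware data retrieval
# lets application code drop.
def degf_to_degc(deg_f: float) -> float:
    """Fahrenheit to Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

def psi_to_bar(psi: float) -> float:
    """Pounds per square inch to bar (1 psi ~ 0.0689475729 bar)."""
    return psi * 0.0689475729

print(degf_to_degc(212.0))            # 100.0
print(round(psi_to_bar(14.5038), 3))  # 1.0
```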

Related products: Product Releases

Document Parsing - Using generative AI to assist in Data Management

Dear Community, we would like to provide an update on our ongoing efforts to address the challenges associated with unstructured data. We have developed this tool closely with Aker BP, while concurrently identifying areas where it could deliver significant business value. Throughout the process, we have identified applications such as Material Master data management, brownfield modification, and management of asset lifecycle information.

Significance of the Initiative: The primary benefit is a reduction in the time spent wrestling with tedious data extraction, allowing users to allocate more time to meaningful tasks. Our document data extraction tool not only saves time but also serves as a safeguard against errors often associated with manual input. Furthermore, we are looking to develop tooling and processes to mitigate and eliminate errors generated by the LLM.

Issues Addressed:

- Problem: Manual data work resembling an endless endeavor. Key value driver: time savings.
- Problem: Errors and discrepancies resulting from a high volume of manual input. Key value driver: error minimization.
- Problem: Applications hindered by sparse data. Key value driver: establishment of a robust data foundation for applications.

Our focus is on streamlining processes to establish a singular, definitive digital version, eliminating the need to navigate through multiple iterations. This approach adds tangible business value, particularly in terms of time savings and error reduction, benefiting users who rely on accurate data for various applications. The implementation aims to provide enhanced efficiency and reliability in data handling.

For more information on the data management journey at Aker BP, feel free to watch the webinar here.

Related products: Contextualization

Deprecation Notice for AIR and Introduction of New Monitoring Capabilities in Cognite Charts

Hi CDF Users, this is a reminder notice for the deprecation of AIR. The alpha-maturity monitoring solution AIR will be deprecated on 2023/12/31, and we will no longer provide support for AIR after this date. To ensure a seamless transition and to provide you with improved monitoring capabilities, we are excited to introduce our new monitoring solution in Charts, which is in open beta and is now available to all customers.

Key Features of Monitoring in Charts:

- Enhanced user experience: Monitoring in Charts has a better user interface, providing a user-friendly experience for proactive troubleshooting and root cause analysis.
- Integration with no-code calculations: Our new solution allows for effortless integration with no-code calculations, streamlining the monitoring process and empowering users to derive insights without the need for extensive coding.
- Stability and reliability: Monitoring in Charts is built with stability and reliability in mind.
- Scalability: Monitoring in Charts is built on a backend API that is better suited for scaling, making it an overall better solution than AIR as your monitoring needs grow.

We believe that the transition to Charts will bring significant benefits to your monitoring needs. We appreciate your understanding and look forward to continuing our partnership with you as we embark on this exciting new chapter in monitoring capabilities within CDF.

Related products: Charts